00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3661 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3263 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.103 The recommended git tool is: git 00:00:00.103 using credential 00000000-0000-0000-0000-000000000002 00:00:00.108 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.161 Fetching changes from the remote Git repository 00:00:00.163 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.197 Using shallow fetch with depth 1 00:00:00.197 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.197 > git --version # timeout=10 00:00:00.221 > git --version # 'git version 2.39.2' 00:00:00.221 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.233 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.233 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.483 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.494 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.505 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:06.505 > git config core.sparsecheckout # timeout=10 00:00:06.514 > git read-tree -mu HEAD # timeout=10 00:00:06.531 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:06.548 Commit message: "inventory: add WCP3 to free inventory" 00:00:06.549 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:06.659 [Pipeline] Start of Pipeline 00:00:06.673 [Pipeline] library 00:00:06.674 Loading library shm_lib@master 00:00:06.674 Library shm_lib@master is cached. Copying from home. 00:00:06.688 [Pipeline] node 00:00:06.697 Running on VM-host-SM17 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.699 [Pipeline] { 00:00:06.707 [Pipeline] catchError 00:00:06.708 [Pipeline] { 00:00:06.720 [Pipeline] wrap 00:00:06.730 [Pipeline] { 00:00:06.736 [Pipeline] stage 00:00:06.737 [Pipeline] { (Prologue) 00:00:06.751 [Pipeline] echo 00:00:06.752 Node: VM-host-SM17 00:00:06.756 [Pipeline] cleanWs 00:00:06.763 [WS-CLEANUP] Deleting project workspace... 00:00:06.763 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.769 [WS-CLEANUP] done 00:00:06.922 [Pipeline] setCustomBuildProperty 00:00:07.000 [Pipeline] httpRequest 00:00:07.030 [Pipeline] echo 00:00:07.031 Sorcerer 10.211.164.101 is alive 00:00:07.037 [Pipeline] httpRequest 00:00:07.041 HttpMethod: GET 00:00:07.041 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.042 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.059 Response Code: HTTP/1.1 200 OK 00:00:07.059 Success: Status code 200 is in the accepted range: 200,404 00:00:07.060 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:13.864 [Pipeline] sh 00:00:14.144 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:14.160 [Pipeline] httpRequest 00:00:14.184 [Pipeline] echo 00:00:14.186 Sorcerer 10.211.164.101 is alive 00:00:14.195 [Pipeline] httpRequest 00:00:14.200 HttpMethod: GET 00:00:14.200 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:14.201 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:14.210 Response Code: HTTP/1.1 200 OK 00:00:14.211 Success: Status code 200 is in the accepted range: 200,404 00:00:14.211 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:58.309 [Pipeline] sh 00:00:58.586 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:01.890 [Pipeline] sh 00:01:02.171 + git -C spdk log --oneline -n5 00:01:02.171 719d03c6a sock/uring: only register net impl if supported 00:01:02.171 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:02.171 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:02.171 6c7c1f57e accel: add sequence outstanding stat 00:01:02.171 3bc8e6a26 accel: add utility to put task 00:01:02.194 [Pipeline] withCredentials 00:01:02.206 > git --version # timeout=10 00:01:02.219 > git --version # 'git version 2.39.2' 00:01:02.237 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:02.239 [Pipeline] { 00:01:02.249 [Pipeline] retry 00:01:02.252 [Pipeline] { 00:01:02.270 [Pipeline] sh 00:01:02.551 + git ls-remote http://dpdk.org/git/dpdk main 00:01:02.564 [Pipeline] } 00:01:02.586 [Pipeline] // retry 00:01:02.593 [Pipeline] } 00:01:02.616 [Pipeline] // withCredentials 00:01:02.627 [Pipeline] httpRequest 00:01:02.645 [Pipeline] echo 00:01:02.647 Sorcerer 10.211.164.101 is alive 00:01:02.655 [Pipeline] httpRequest 00:01:02.659 HttpMethod: GET 00:01:02.660 URL: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:01:02.660 Sending request to url: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:01:02.661 Response Code: HTTP/1.1 200 OK 00:01:02.661 Success: Status code 200 is in the accepted range: 200,404 00:01:02.662 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:01:09.200 [Pipeline] sh 00:01:09.482 + tar --no-same-owner -xf dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:01:10.871 [Pipeline] sh 00:01:11.152 + git -C dpdk log --oneline -n5 00:01:11.152 fa8d2f7f28 version: 24.07-rc2 00:01:11.152 d4bc3c2e01 maintainers: update for cxgbe driver 00:01:11.152 2227c0ed9a maintainers: update for Microsoft drivers 00:01:11.152 8385370337 
maintainers: update for Arm 00:01:11.152 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:01:11.172 [Pipeline] writeFile 00:01:11.188 [Pipeline] sh 00:01:11.468 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:11.478 [Pipeline] sh 00:01:11.756 + cat autorun-spdk.conf 00:01:11.756 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.756 SPDK_TEST_NVME=1 00:01:11.756 SPDK_TEST_FTL=1 00:01:11.756 SPDK_TEST_ISAL=1 00:01:11.756 SPDK_RUN_ASAN=1 00:01:11.756 SPDK_RUN_UBSAN=1 00:01:11.756 SPDK_TEST_XNVME=1 00:01:11.756 SPDK_TEST_NVME_FDP=1 00:01:11.756 SPDK_TEST_NATIVE_DPDK=main 00:01:11.756 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:11.756 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:11.762 RUN_NIGHTLY=1 00:01:11.764 [Pipeline] } 00:01:11.778 [Pipeline] // stage 00:01:11.793 [Pipeline] stage 00:01:11.795 [Pipeline] { (Run VM) 00:01:11.808 [Pipeline] sh 00:01:12.087 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:12.087 + echo 'Start stage prepare_nvme.sh' 00:01:12.087 Start stage prepare_nvme.sh 00:01:12.087 + [[ -n 6 ]] 00:01:12.087 + disk_prefix=ex6 00:01:12.087 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:12.087 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:12.087 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:12.087 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.087 ++ SPDK_TEST_NVME=1 00:01:12.087 ++ SPDK_TEST_FTL=1 00:01:12.087 ++ SPDK_TEST_ISAL=1 00:01:12.087 ++ SPDK_RUN_ASAN=1 00:01:12.087 ++ SPDK_RUN_UBSAN=1 00:01:12.087 ++ SPDK_TEST_XNVME=1 00:01:12.087 ++ SPDK_TEST_NVME_FDP=1 00:01:12.087 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:12.087 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:12.087 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:12.087 ++ RUN_NIGHTLY=1 00:01:12.087 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:12.087 + nvme_files=() 00:01:12.087 + declare -A nvme_files 00:01:12.087 + backend_dir=/var/lib/libvirt/images/backends 00:01:12.087 + nvme_files['nvme.img']=5G 00:01:12.087 + nvme_files['nvme-cmb.img']=5G 00:01:12.087 + nvme_files['nvme-multi0.img']=4G 00:01:12.087 + nvme_files['nvme-multi1.img']=4G 00:01:12.087 + nvme_files['nvme-multi2.img']=4G 00:01:12.087 + nvme_files['nvme-openstack.img']=8G 00:01:12.087 + nvme_files['nvme-zns.img']=5G 00:01:12.087 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:12.087 + (( SPDK_TEST_FTL == 1 )) 00:01:12.087 + nvme_files["nvme-ftl.img"]=6G 00:01:12.087 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:12.087 + nvme_files["nvme-fdp.img"]=1G 00:01:12.087 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:12.087 + for nvme in "${!nvme_files[@]}" 00:01:12.087 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:12.087 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.087 + for nvme in "${!nvme_files[@]}" 00:01:12.087 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:01:12.087 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:12.087 + for nvme in "${!nvme_files[@]}" 00:01:12.087 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:12.087 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.087 + for nvme in "${!nvme_files[@]}" 00:01:12.087 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:12.087 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:12.087 + for nvme in "${!nvme_files[@]}" 00:01:12.087 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:12.087 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.087 + for nvme in "${!nvme_files[@]}" 00:01:12.087 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:12.087 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.087 + for nvme in "${!nvme_files[@]}" 00:01:12.087 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:12.087 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.345 + for nvme in "${!nvme_files[@]}" 00:01:12.345 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:01:12.345 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:12.345 + for nvme in "${!nvme_files[@]}" 00:01:12.345 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:12.345 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.345 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:12.345 + echo 'End stage prepare_nvme.sh' 00:01:12.345 End stage prepare_nvme.sh 00:01:12.355 [Pipeline] sh 00:01:12.633 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:12.633 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:12.633 00:01:12.634 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:12.634 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:12.634 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:12.634 HELP=0 00:01:12.634 DRY_RUN=0 00:01:12.634 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:01:12.634 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:12.634 NVME_AUTO_CREATE=0 00:01:12.634 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:01:12.634 NVME_CMB=,,,, 00:01:12.634 NVME_PMR=,,,, 00:01:12.634 NVME_ZNS=,,,, 00:01:12.634 NVME_MS=true,,,, 00:01:12.634 NVME_FDP=,,,on, 00:01:12.634 SPDK_VAGRANT_DISTRO=fedora38 00:01:12.634 SPDK_VAGRANT_VMCPU=10 00:01:12.634 SPDK_VAGRANT_VMRAM=12288 00:01:12.634 SPDK_VAGRANT_PROVIDER=libvirt 00:01:12.634 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:12.634 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:12.634 SPDK_OPENSTACK_NETWORK=0 00:01:12.634 VAGRANT_PACKAGE_BOX=0 00:01:12.634 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:12.634 FORCE_DISTRO=true 00:01:12.634 VAGRANT_BOX_VERSION= 00:01:12.634 EXTRA_VAGRANTFILES= 00:01:12.634 NIC_MODEL=e1000 00:01:12.634 00:01:12.634 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:01:12.634 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:15.977 Bringing machine 'default' up with 'libvirt' provider... 00:01:16.544 ==> default: Creating image (snapshot of base box volume). 00:01:16.544 ==> default: Creating domain with the following settings... 
00:01:16.544 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720815490_2514c1c875d9b3f25370 00:01:16.544 ==> default: -- Domain type: kvm 00:01:16.544 ==> default: -- Cpus: 10 00:01:16.544 ==> default: -- Feature: acpi 00:01:16.544 ==> default: -- Feature: apic 00:01:16.544 ==> default: -- Feature: pae 00:01:16.544 ==> default: -- Memory: 12288M 00:01:16.544 ==> default: -- Memory Backing: hugepages: 00:01:16.544 ==> default: -- Management MAC: 00:01:16.544 ==> default: -- Loader: 00:01:16.544 ==> default: -- Nvram: 00:01:16.544 ==> default: -- Base box: spdk/fedora38 00:01:16.544 ==> default: -- Storage pool: default 00:01:16.544 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720815490_2514c1c875d9b3f25370.img (20G) 00:01:16.544 ==> default: -- Volume Cache: default 00:01:16.544 ==> default: -- Kernel: 00:01:16.544 ==> default: -- Initrd: 00:01:16.544 ==> default: -- Graphics Type: vnc 00:01:16.545 ==> default: -- Graphics Port: -1 00:01:16.545 ==> default: -- Graphics IP: 127.0.0.1 00:01:16.545 ==> default: -- Graphics Password: Not defined 00:01:16.545 ==> default: -- Video Type: cirrus 00:01:16.545 ==> default: -- Video VRAM: 9216 00:01:16.545 ==> default: -- Sound Type: 00:01:16.545 ==> default: -- Keymap: en-us 00:01:16.545 ==> default: -- TPM Path: 00:01:16.545 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:16.545 ==> default: -- Command line args: 00:01:16.545 ==> default: -> value=-device, 00:01:16.545 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:16.545 ==> default: -> value=-drive, 00:01:16.545 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:16.545 ==> default: -> value=-device, 00:01:16.545 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:16.545 ==> default: -> value=-device, 00:01:16.545 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:16.545 ==> default: -> value=-drive, 00:01:16.545 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:01:16.545 ==> default: -> value=-device, 00:01:16.545 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.545 ==> default: -> value=-device, 00:01:16.545 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:16.545 ==> default: -> value=-drive, 00:01:16.545 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:16.545 ==> default: -> value=-device, 00:01:16.545 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.545 ==> default: -> value=-drive, 00:01:16.545 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:16.545 ==> default: -> value=-device, 00:01:16.545 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.545 ==> default: -> value=-drive, 00:01:16.545 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:16.545 ==> default: -> value=-device, 00:01:16.545 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.545 ==> default: -> value=-device, 00:01:16.545 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:16.545 ==> default: -> value=-device, 00:01:16.545 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:16.545 ==> default: -> value=-drive, 00:01:16.545 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:16.545 ==> default: -> value=-device, 00:01:16.545 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.803 ==> default: Creating shared folders metadata... 00:01:16.803 ==> default: Starting domain. 00:01:18.181 ==> default: Waiting for domain to get an IP address... 00:01:36.324 ==> default: Waiting for SSH to become available... 00:01:36.324 ==> default: Configuring and enabling network interfaces... 00:01:39.619 default: SSH address: 192.168.121.31:22 00:01:39.619 default: SSH username: vagrant 00:01:39.619 default: SSH auth method: private key 00:01:41.565 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:49.705 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:54.972 ==> default: Mounting SSHFS shared folder... 00:01:56.349 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:56.349 ==> default: Checking Mount.. 00:01:57.281 ==> default: Folder Successfully Mounted! 00:01:57.281 ==> default: Running provisioner: file... 00:01:58.239 default: ~/.gitconfig => .gitconfig 00:01:58.496 00:01:58.496 SUCCESS! 00:01:58.496 00:01:58.496 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:58.496 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:58.496 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:58.496 00:01:58.505 [Pipeline] } 00:01:58.516 [Pipeline] // stage 00:01:58.523 [Pipeline] dir 00:01:58.523 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:01:58.524 [Pipeline] { 00:01:58.536 [Pipeline] catchError 00:01:58.538 [Pipeline] { 00:01:58.549 [Pipeline] sh 00:01:58.823 + vagrant ssh-config --host vagrant 00:01:58.823 + sed -ne /^Host/,$p 00:01:58.823 + tee ssh_conf 00:02:03.009 Host vagrant 00:02:03.009 HostName 192.168.121.31 00:02:03.009 User vagrant 00:02:03.009 Port 22 00:02:03.009 UserKnownHostsFile /dev/null 00:02:03.009 StrictHostKeyChecking no 00:02:03.009 PasswordAuthentication no 00:02:03.009 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:03.009 IdentitiesOnly yes 00:02:03.009 LogLevel FATAL 00:02:03.009 ForwardAgent yes 00:02:03.009 ForwardX11 yes 00:02:03.009 00:02:03.023 [Pipeline] withEnv 00:02:03.025 [Pipeline] { 00:02:03.042 [Pipeline] sh 00:02:03.322 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:03.322 source /etc/os-release 00:02:03.322 [[ -e /image.version ]] && img=$(< /image.version) 00:02:03.322 # Minimal, systemd-like check. 
00:02:03.322 if [[ -e /.dockerenv ]]; then 00:02:03.322 # Clear garbage from the node's name: 00:02:03.322 # agt-er_autotest_547-896 -> autotest_547-896 00:02:03.322 # $HOSTNAME is the actual container id 00:02:03.322 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:03.322 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:03.322 # We can assume this is a mount from a host where container is running, 00:02:03.322 # so fetch its hostname to easily identify the target swarm worker. 00:02:03.322 container="$(< /etc/hostname) ($agent)" 00:02:03.322 else 00:02:03.322 # Fallback 00:02:03.322 container=$agent 00:02:03.322 fi 00:02:03.322 fi 00:02:03.322 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:03.322 00:02:03.592 [Pipeline] } 00:02:03.611 [Pipeline] // withEnv 00:02:03.619 [Pipeline] setCustomBuildProperty 00:02:03.634 [Pipeline] stage 00:02:03.636 [Pipeline] { (Tests) 00:02:03.656 [Pipeline] sh 00:02:03.934 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:04.286 [Pipeline] sh 00:02:04.563 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:04.579 [Pipeline] timeout 00:02:04.580 Timeout set to expire in 40 min 00:02:04.581 [Pipeline] { 00:02:04.597 [Pipeline] sh 00:02:04.876 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:05.443 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:02:05.458 [Pipeline] sh 00:02:05.737 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:06.010 [Pipeline] sh 00:02:06.289 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:06.566 [Pipeline] sh 00:02:06.842 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:07.101 ++ readlink -f spdk_repo 00:02:07.101 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:07.101 + [[ -n /home/vagrant/spdk_repo ]] 00:02:07.101 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:07.101 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:07.101 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:07.101 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:07.101 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:07.101 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:07.101 + cd /home/vagrant/spdk_repo 00:02:07.101 + source /etc/os-release 00:02:07.101 ++ NAME='Fedora Linux' 00:02:07.101 ++ VERSION='38 (Cloud Edition)' 00:02:07.101 ++ ID=fedora 00:02:07.101 ++ VERSION_ID=38 00:02:07.101 ++ VERSION_CODENAME= 00:02:07.101 ++ PLATFORM_ID=platform:f38 00:02:07.101 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:07.101 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:07.101 ++ LOGO=fedora-logo-icon 00:02:07.101 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:07.101 ++ HOME_URL=https://fedoraproject.org/ 00:02:07.101 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:07.101 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:07.101 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:07.101 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:07.101 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:07.101 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:07.101 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:07.101 ++ SUPPORT_END=2024-05-14 00:02:07.101 ++ VARIANT='Cloud Edition' 00:02:07.101 ++ VARIANT_ID=cloud 00:02:07.101 + uname -a 00:02:07.101 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:07.101 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:07.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:07.617 Hugepages 00:02:07.617 node hugesize free / total 00:02:07.617 node0 1048576kB 0 / 0 00:02:07.617 node0 2048kB 0 / 0 00:02:07.617 00:02:07.617 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:07.617 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:07.617 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:07.618 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:07.618 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:07.618 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:07.618 + rm -f /tmp/spdk-ld-path 00:02:07.618 + source autorun-spdk.conf 00:02:07.618 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.618 ++ SPDK_TEST_NVME=1 00:02:07.618 ++ SPDK_TEST_FTL=1 00:02:07.618 ++ SPDK_TEST_ISAL=1 00:02:07.618 ++ SPDK_RUN_ASAN=1 00:02:07.618 ++ SPDK_RUN_UBSAN=1 00:02:07.618 ++ SPDK_TEST_XNVME=1 00:02:07.618 ++ SPDK_TEST_NVME_FDP=1 00:02:07.618 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:07.618 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:07.618 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.618 ++ RUN_NIGHTLY=1 00:02:07.618 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:07.618 + [[ -n '' ]] 00:02:07.618 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:07.876 + for M in /var/spdk/build-*-manifest.txt 00:02:07.876 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:07.876 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.876 + for M in /var/spdk/build-*-manifest.txt 00:02:07.876 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:07.876 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.876 ++ uname 00:02:07.876 + [[ Linux == \L\i\n\u\x ]] 00:02:07.876 + sudo dmesg -T 00:02:07.876 + sudo dmesg --clear 00:02:07.876 + dmesg_pid=5887 00:02:07.876 + [[ Fedora Linux == FreeBSD ]] 00:02:07.876 + export 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.876 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.876 + sudo dmesg -Tw 00:02:07.876 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:07.876 + [[ -x /usr/src/fio-static/fio ]] 00:02:07.876 + export FIO_BIN=/usr/src/fio-static/fio 00:02:07.876 + FIO_BIN=/usr/src/fio-static/fio 00:02:07.876 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:07.876 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:07.876 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:07.876 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.876 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.876 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:07.876 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.876 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.876 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:07.876 Test configuration: 00:02:07.876 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.876 SPDK_TEST_NVME=1 00:02:07.876 SPDK_TEST_FTL=1 00:02:07.876 SPDK_TEST_ISAL=1 00:02:07.876 SPDK_RUN_ASAN=1 00:02:07.876 SPDK_RUN_UBSAN=1 00:02:07.876 SPDK_TEST_XNVME=1 00:02:07.876 SPDK_TEST_NVME_FDP=1 00:02:07.876 SPDK_TEST_NATIVE_DPDK=main 00:02:07.876 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:07.876 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.876 RUN_NIGHTLY=1 20:19:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:07.876 20:19:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:07.876 20:19:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.876 20:19:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.876 20:19:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.876 20:19:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.876 20:19:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.876 20:19:01 -- paths/export.sh@5 -- $ export PATH 00:02:07.876 20:19:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:02:07.876 20:19:01 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:07.876 20:19:01 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:07.876 20:19:01 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720815541.XXXXXX 00:02:07.876 20:19:01 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720815541.tVfJqP 00:02:07.876 20:19:01 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:07.876 20:19:01 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:02:07.876 20:19:01 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:07.876 20:19:01 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:07.876 20:19:01 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:07.876 20:19:01 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:07.876 20:19:01 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:07.876 20:19:01 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:07.876 20:19:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.876 20:19:02 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:07.876 20:19:02 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:07.876 20:19:02 -- pm/common@17 -- $ local monitor 00:02:07.876 20:19:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.876 20:19:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.876 20:19:02 -- pm/common@25 -- $ sleep 1 00:02:07.876 20:19:02 -- pm/common@21 -- $ date +%s 00:02:07.876 20:19:02 -- pm/common@21 -- $ date +%s 00:02:07.876 20:19:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720815542 00:02:07.876 20:19:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720815542 00:02:08.135 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720815542_collect-vmstat.pm.log 00:02:08.135 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720815542_collect-cpu-load.pm.log 00:02:09.070 20:19:03 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:09.070 20:19:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:09.070 20:19:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:09.070 20:19:03 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:09.070 20:19:03 -- spdk/autobuild.sh@16 -- $ date -u 00:02:09.070 Fri Jul 12 08:19:03 PM UTC 2024 00:02:09.070 20:19:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:09.070 v24.09-pre-202-g719d03c6a 00:02:09.070 20:19:03 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:09.070 20:19:03 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:09.070 20:19:03 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:09.070 
20:19:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:09.070 20:19:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.070 ************************************ 00:02:09.070 START TEST asan 00:02:09.070 ************************************ 00:02:09.070 using asan 00:02:09.070 20:19:03 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:02:09.070 00:02:09.070 real 0m0.000s 00:02:09.070 user 0m0.000s 00:02:09.070 sys 0m0.000s 00:02:09.070 20:19:03 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:09.070 ************************************ 00:02:09.070 END TEST asan 00:02:09.070 20:19:03 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:09.070 ************************************ 00:02:09.070 20:19:03 -- common/autotest_common.sh@1142 -- $ return 0 00:02:09.070 20:19:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:09.070 20:19:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:09.070 20:19:03 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:09.070 20:19:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:09.070 20:19:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.070 ************************************ 00:02:09.070 START TEST ubsan 00:02:09.070 ************************************ 00:02:09.070 using ubsan 00:02:09.070 20:19:03 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:09.070 00:02:09.070 real 0m0.000s 00:02:09.070 user 0m0.000s 00:02:09.070 sys 0m0.000s 00:02:09.070 ************************************ 00:02:09.070 END TEST ubsan 00:02:09.070 ************************************ 00:02:09.070 20:19:03 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:09.070 20:19:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:09.070 20:19:03 -- common/autotest_common.sh@1142 -- $ return 0 00:02:09.070 20:19:03 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:09.070 20:19:03 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:09.070 20:19:03 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:09.070 20:19:03 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:09.070 20:19:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:09.070 20:19:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.070 ************************************ 00:02:09.070 START TEST build_native_dpdk 00:02:09.070 ************************************ 00:02:09.070 20:19:03 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:09.070 20:19:03 
build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:09.070 fa8d2f7f28 version: 24.07-rc2 00:02:09.070 d4bc3c2e01 maintainers: update for cxgbe driver 00:02:09.070 2227c0ed9a maintainers: update for Microsoft drivers 00:02:09.070 8385370337 maintainers: update for Arm 00:02:09.070 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc2 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:09.070 20:19:03 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc2 21.11.0 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 21.11.0 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:09.070 
20:19:03 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:09.070 20:19:03 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:09.071 20:19:03 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:02:09.071 20:19:03 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:09.071 20:19:03 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:09.071 20:19:03 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:09.071 20:19:03 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:02:09.071 20:19:03 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:09.071 20:19:03 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:09.071 20:19:03 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:09.071 20:19:03 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:09.071 20:19:03 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:09.071 20:19:03 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:09.071 20:19:03 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:09.071 20:19:03 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:09.071 patching file config/rte_config.h 00:02:09.071 Hunk #1 succeeded at 70 (offset 11 lines). 
00:02:09.071 20:19:03 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:09.071 20:19:03 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:09.071 20:19:03 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:09.071 20:19:03 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:09.071 20:19:03 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:14.336 The Meson build system 00:02:14.336 Version: 1.3.1 00:02:14.336 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:14.336 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:14.336 Build type: native build 00:02:14.336 Program cat found: YES (/usr/bin/cat) 00:02:14.336 Project name: DPDK 00:02:14.336 Project version: 24.07.0-rc2 00:02:14.336 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:14.336 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:14.336 Host machine cpu family: x86_64 00:02:14.336 Host machine cpu: x86_64 00:02:14.336 Message: ## Building in Developer Mode ## 00:02:14.336 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.336 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:14.336 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.336 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:14.336 Program cat found: YES (/usr/bin/cat) 00:02:14.336 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:14.336 Compiler for C supports arguments -march=native: YES 00:02:14.336 Checking for size of "void *" : 8 00:02:14.336 Checking for size of "void *" : 8 (cached) 00:02:14.336 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:14.336 Library m found: YES 00:02:14.336 Library numa found: YES 00:02:14.336 Has header "numaif.h" : YES 00:02:14.336 Library fdt found: NO 00:02:14.336 Library execinfo found: NO 00:02:14.336 Has header "execinfo.h" : YES 00:02:14.336 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:14.336 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.336 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.336 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.336 Run-time dependency openssl found: YES 3.0.9 00:02:14.336 Run-time dependency libpcap found: YES 1.10.4 00:02:14.336 Has header "pcap.h" with dependency libpcap: YES 00:02:14.336 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.336 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.336 Compiler for C supports arguments -Wformat: YES 00:02:14.336 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:14.336 Compiler for C supports arguments -Wformat-security: NO 00:02:14.336 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.336 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.336 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.336 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.336 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.336 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.336 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.336 Compiler for C supports arguments -Wundef: YES 00:02:14.336 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.336 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.336 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.336 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.336 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:14.336 Program objdump found: YES (/usr/bin/objdump) 00:02:14.336 Compiler for C supports arguments -mavx512f: YES 00:02:14.336 Checking if "AVX512 checking" compiles: YES 00:02:14.336 Fetching value of define "__SSE4_2__" : 1 00:02:14.336 Fetching value of define "__AES__" : 1 00:02:14.336 Fetching value of define "__AVX__" : 1 00:02:14.336 Fetching value of define "__AVX2__" : 1 00:02:14.336 Fetching value of define "__AVX512BW__" : (undefined) 00:02:14.336 Fetching value of define "__AVX512CD__" : (undefined) 00:02:14.336 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:14.336 Fetching value of define "__AVX512F__" : (undefined) 00:02:14.336 Fetching value of define "__AVX512VL__" : (undefined) 00:02:14.336 Fetching value of define "__PCLMUL__" : 1 00:02:14.336 Fetching value of define "__RDRND__" : 1 00:02:14.336 Fetching value of define "__RDSEED__" : 1 00:02:14.336 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:14.336 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.336 Message: lib/log: Defining dependency "log" 00:02:14.336 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.336 Message: lib/argparse: Defining dependency "argparse" 00:02:14.336 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.336 Checking for function "getentropy" : NO 
00:02:14.336 Message: lib/eal: Defining dependency "eal" 00:02:14.336 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:14.336 Message: lib/ring: Defining dependency "ring" 00:02:14.336 Message: lib/rcu: Defining dependency "rcu" 00:02:14.336 Message: lib/mempool: Defining dependency "mempool" 00:02:14.336 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.336 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.336 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.336 Compiler for C supports arguments -mpclmul: YES 00:02:14.336 Compiler for C supports arguments -maes: YES 00:02:14.336 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.336 Compiler for C supports arguments -mavx512bw: YES 00:02:14.336 Compiler for C supports arguments -mavx512dq: YES 00:02:14.336 Compiler for C supports arguments -mavx512vl: YES 00:02:14.336 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:14.336 Compiler for C supports arguments -mavx2: YES 00:02:14.336 Compiler for C supports arguments -mavx: YES 00:02:14.336 Message: lib/net: Defining dependency "net" 00:02:14.336 Message: lib/meter: Defining dependency "meter" 00:02:14.336 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.336 Message: lib/pci: Defining dependency "pci" 00:02:14.336 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.336 Message: lib/metrics: Defining dependency "metrics" 00:02:14.336 Message: lib/hash: Defining dependency "hash" 00:02:14.336 Message: lib/timer: Defining dependency "timer" 00:02:14.336 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.336 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:14.336 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:14.336 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:14.336 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:14.336 Message: lib/acl: Defining dependency "acl" 00:02:14.336 Message: lib/bbdev: Defining dependency "bbdev" 00:02:14.336 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:14.336 Run-time dependency libelf found: YES 0.190 00:02:14.336 Message: lib/bpf: Defining dependency "bpf" 00:02:14.336 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:14.336 Message: lib/compressdev: Defining dependency "compressdev" 00:02:14.336 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.336 Message: lib/distributor: Defining dependency "distributor" 00:02:14.336 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.336 Message: lib/efd: Defining dependency "efd" 00:02:14.336 Message: lib/eventdev: Defining dependency "eventdev" 00:02:14.336 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:14.336 Message: lib/gpudev: Defining dependency "gpudev" 00:02:14.336 Message: lib/gro: Defining dependency "gro" 00:02:14.336 Message: lib/gso: Defining dependency "gso" 00:02:14.336 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:14.336 Message: lib/jobstats: Defining dependency "jobstats" 00:02:14.336 Message: lib/latencystats: Defining dependency "latencystats" 00:02:14.336 Message: lib/lpm: Defining dependency "lpm" 00:02:14.336 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.336 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:14.336 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:14.336 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 
00:02:14.336 Message: lib/member: Defining dependency "member" 00:02:14.336 Message: lib/pcapng: Defining dependency "pcapng" 00:02:14.336 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.336 Message: lib/power: Defining dependency "power" 00:02:14.337 Message: lib/rawdev: Defining dependency "rawdev" 00:02:14.337 Message: lib/regexdev: Defining dependency "regexdev" 00:02:14.337 Message: lib/mldev: Defining dependency "mldev" 00:02:14.337 Message: lib/rib: Defining dependency "rib" 00:02:14.337 Message: lib/reorder: Defining dependency "reorder" 00:02:14.337 Message: lib/sched: Defining dependency "sched" 00:02:14.337 Message: lib/security: Defining dependency "security" 00:02:14.337 Message: lib/stack: Defining dependency "stack" 00:02:14.337 Has header "linux/userfaultfd.h" : YES 00:02:14.337 Has header "linux/vduse.h" : YES 00:02:14.337 Message: lib/vhost: Defining dependency "vhost" 00:02:14.337 Message: lib/ipsec: Defining dependency "ipsec" 00:02:14.337 Message: lib/pdcp: Defining dependency "pdcp" 00:02:14.337 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.337 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:14.337 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:14.337 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:14.337 Message: lib/fib: Defining dependency "fib" 00:02:14.337 Message: lib/port: Defining dependency "port" 00:02:14.337 Message: lib/pdump: Defining dependency "pdump" 00:02:14.337 Message: lib/table: Defining dependency "table" 00:02:14.337 Message: lib/pipeline: Defining dependency "pipeline" 00:02:14.337 Message: lib/graph: Defining dependency "graph" 00:02:14.337 Message: lib/node: Defining dependency "node" 00:02:14.337 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.767 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.767 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.767 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.767 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:15.767 Compiler for C supports arguments -Wno-unused-value: YES 00:02:15.767 Compiler for C supports arguments -Wno-format: YES 00:02:15.767 Compiler for C supports arguments -Wno-format-security: YES 00:02:15.767 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:15.767 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:15.767 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:15.767 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:15.767 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.767 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.767 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:15.767 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:15.767 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:15.767 Has header "sys/epoll.h" : YES 00:02:15.767 Program doxygen found: YES (/usr/bin/doxygen) 00:02:15.767 Configuring doxy-api-html.conf using configuration 00:02:15.767 Configuring doxy-api-man.conf using configuration 00:02:15.767 Program mandb found: YES (/usr/bin/mandb) 00:02:15.767 Program sphinx-build found: NO 00:02:15.767 Configuring rte_build_config.h using configuration 00:02:15.767 Message: 00:02:15.767 ================= 00:02:15.767 Applications Enabled 00:02:15.767 ================= 00:02:15.767 00:02:15.767 apps: 
00:02:15.767 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:15.767 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:15.767 test-pmd, test-regex, test-sad, test-security-perf, 00:02:15.767 00:02:15.767 Message: 00:02:15.767 ================= 00:02:15.768 Libraries Enabled 00:02:15.768 ================= 00:02:15.768 00:02:15.768 libs: 00:02:15.768 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:15.768 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:15.768 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:15.768 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:15.768 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:15.768 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:15.768 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:15.768 graph, node, 00:02:15.768 00:02:15.768 Message: 00:02:15.768 =============== 00:02:15.768 Drivers Enabled 00:02:15.768 =============== 00:02:15.768 00:02:15.768 common: 00:02:15.768 00:02:15.768 bus: 00:02:15.768 pci, vdev, 00:02:15.768 mempool: 00:02:15.768 ring, 00:02:15.768 dma: 00:02:15.768 00:02:15.768 net: 00:02:15.768 i40e, 00:02:15.768 raw: 00:02:15.768 00:02:15.768 crypto: 00:02:15.768 00:02:15.768 compress: 00:02:15.768 00:02:15.768 regex: 00:02:15.768 00:02:15.768 ml: 00:02:15.768 00:02:15.768 vdpa: 00:02:15.768 00:02:15.768 event: 00:02:15.768 00:02:15.768 baseband: 00:02:15.768 00:02:15.768 gpu: 00:02:15.768 00:02:15.768 00:02:15.768 Message: 00:02:15.768 ================= 00:02:15.768 Content Skipped 00:02:15.768 ================= 00:02:15.768 00:02:15.768 apps: 00:02:15.768 00:02:15.768 libs: 00:02:15.768 00:02:15.768 drivers: 00:02:15.768 common/cpt: not in enabled drivers build config 00:02:15.768 common/dpaax: not in enabled drivers build config 00:02:15.768 common/iavf: not in enabled drivers build config 00:02:15.768 common/idpf: not in enabled drivers build config 00:02:15.768 common/ionic: not in enabled drivers build config 00:02:15.768 common/mvep: not in enabled drivers build config 00:02:15.768 common/octeontx: not in enabled drivers build config 00:02:15.768 bus/auxiliary: not in enabled drivers build config 00:02:15.768 bus/cdx: not in enabled drivers build config 00:02:15.768 bus/dpaa: not in enabled drivers build config 00:02:15.768 bus/fslmc: not in enabled drivers build config 00:02:15.768 bus/ifpga: not in enabled drivers build config 00:02:15.768 bus/platform: not in enabled drivers build config 00:02:15.768 bus/uacce: not in enabled drivers build config 00:02:15.768 bus/vmbus: not in enabled drivers build config 00:02:15.768 common/cnxk: not in enabled drivers build config 00:02:15.768 common/mlx5: not in enabled drivers build config 00:02:15.768 common/nfp: not in enabled drivers build config 00:02:15.768 common/nitrox: not in enabled drivers build config 00:02:15.768 common/qat: not in enabled drivers build config 00:02:15.768 common/sfc_efx: not in enabled drivers build config 00:02:15.768 mempool/bucket: not in enabled drivers build config 00:02:15.768 mempool/cnxk: not in enabled drivers build config 00:02:15.768 mempool/dpaa: not in enabled drivers build config 00:02:15.768 mempool/dpaa2: not in enabled drivers build config 00:02:15.768 mempool/octeontx: not in enabled drivers build config 00:02:15.768 mempool/stack: not in enabled drivers build config 00:02:15.768 
dma/cnxk: not in enabled drivers build config 00:02:15.768 dma/dpaa: not in enabled drivers build config 00:02:15.768 dma/dpaa2: not in enabled drivers build config 00:02:15.768 dma/hisilicon: not in enabled drivers build config 00:02:15.768 dma/idxd: not in enabled drivers build config 00:02:15.768 dma/ioat: not in enabled drivers build config 00:02:15.768 dma/odm: not in enabled drivers build config 00:02:15.768 dma/skeleton: not in enabled drivers build config 00:02:15.768 net/af_packet: not in enabled drivers build config 00:02:15.768 net/af_xdp: not in enabled drivers build config 00:02:15.768 net/ark: not in enabled drivers build config 00:02:15.768 net/atlantic: not in enabled drivers build config 00:02:15.768 net/avp: not in enabled drivers build config 00:02:15.768 net/axgbe: not in enabled drivers build config 00:02:15.768 net/bnx2x: not in enabled drivers build config 00:02:15.768 net/bnxt: not in enabled drivers build config 00:02:15.768 net/bonding: not in enabled drivers build config 00:02:15.768 net/cnxk: not in enabled drivers build config 00:02:15.768 net/cpfl: not in enabled drivers build config 00:02:15.768 net/cxgbe: not in enabled drivers build config 00:02:15.768 net/dpaa: not in enabled drivers build config 00:02:15.768 net/dpaa2: not in enabled drivers build config 00:02:15.768 net/e1000: not in enabled drivers build config 00:02:15.768 net/ena: not in enabled drivers build config 00:02:15.768 net/enetc: not in enabled drivers build config 00:02:15.768 net/enetfec: not in enabled drivers build config 00:02:15.768 net/enic: not in enabled drivers build config 00:02:15.768 net/failsafe: not in enabled drivers build config 00:02:15.768 net/fm10k: not in enabled drivers build config 00:02:15.768 net/gve: not in enabled drivers build config 00:02:15.768 net/hinic: not in enabled drivers build config 00:02:15.768 net/hns3: not in enabled drivers build config 00:02:15.768 net/iavf: not in enabled drivers build config 00:02:15.768 net/ice: not in enabled drivers build config 00:02:15.768 net/idpf: not in enabled drivers build config 00:02:15.768 net/igc: not in enabled drivers build config 00:02:15.768 net/ionic: not in enabled drivers build config 00:02:15.768 net/ipn3ke: not in enabled drivers build config 00:02:15.768 net/ixgbe: not in enabled drivers build config 00:02:15.768 net/mana: not in enabled drivers build config 00:02:15.768 net/memif: not in enabled drivers build config 00:02:15.768 net/mlx4: not in enabled drivers build config 00:02:15.768 net/mlx5: not in enabled drivers build config 00:02:15.768 net/mvneta: not in enabled drivers build config 00:02:15.768 net/mvpp2: not in enabled drivers build config 00:02:15.768 net/netvsc: not in enabled drivers build config 00:02:15.768 net/nfb: not in enabled drivers build config 00:02:15.768 net/nfp: not in enabled drivers build config 00:02:15.768 net/ngbe: not in enabled drivers build config 00:02:15.768 net/null: not in enabled drivers build config 00:02:15.768 net/octeontx: not in enabled drivers build config 00:02:15.768 net/octeon_ep: not in enabled drivers build config 00:02:15.768 net/pcap: not in enabled drivers build config 00:02:15.768 net/pfe: not in enabled drivers build config 00:02:15.768 net/qede: not in enabled drivers build config 00:02:15.768 net/ring: not in enabled drivers build config 00:02:15.768 net/sfc: not in enabled drivers build config 00:02:15.768 net/softnic: not in enabled drivers build config 00:02:15.768 net/tap: not in enabled drivers build config 00:02:15.768 net/thunderx: not in 
enabled drivers build config 00:02:15.768 net/txgbe: not in enabled drivers build config 00:02:15.768 net/vdev_netvsc: not in enabled drivers build config 00:02:15.768 net/vhost: not in enabled drivers build config 00:02:15.768 net/virtio: not in enabled drivers build config 00:02:15.768 net/vmxnet3: not in enabled drivers build config 00:02:15.768 raw/cnxk_bphy: not in enabled drivers build config 00:02:15.768 raw/cnxk_gpio: not in enabled drivers build config 00:02:15.768 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:15.768 raw/ifpga: not in enabled drivers build config 00:02:15.768 raw/ntb: not in enabled drivers build config 00:02:15.768 raw/skeleton: not in enabled drivers build config 00:02:15.768 crypto/armv8: not in enabled drivers build config 00:02:15.768 crypto/bcmfs: not in enabled drivers build config 00:02:15.768 crypto/caam_jr: not in enabled drivers build config 00:02:15.768 crypto/ccp: not in enabled drivers build config 00:02:15.768 crypto/cnxk: not in enabled drivers build config 00:02:15.768 crypto/dpaa_sec: not in enabled drivers build config 00:02:15.768 crypto/dpaa2_sec: not in enabled drivers build config 00:02:15.768 crypto/ionic: not in enabled drivers build config 00:02:15.768 crypto/ipsec_mb: not in enabled drivers build config 00:02:15.768 crypto/mlx5: not in enabled drivers build config 00:02:15.768 crypto/mvsam: not in enabled drivers build config 00:02:15.768 crypto/nitrox: not in enabled drivers build config 00:02:15.768 crypto/null: not in enabled drivers build config 00:02:15.768 crypto/octeontx: not in enabled drivers build config 00:02:15.768 crypto/openssl: not in enabled drivers build config 00:02:15.768 crypto/scheduler: not in enabled drivers build config 00:02:15.768 crypto/uadk: not in enabled drivers build config 00:02:15.768 crypto/virtio: not in enabled drivers build config 00:02:15.768 compress/isal: not in enabled drivers build config 00:02:15.768 compress/mlx5: not in enabled drivers build config 00:02:15.768 compress/nitrox: not in enabled drivers build config 00:02:15.768 compress/octeontx: not in enabled drivers build config 00:02:15.768 compress/uadk: not in enabled drivers build config 00:02:15.768 compress/zlib: not in enabled drivers build config 00:02:15.768 regex/mlx5: not in enabled drivers build config 00:02:15.768 regex/cn9k: not in enabled drivers build config 00:02:15.768 ml/cnxk: not in enabled drivers build config 00:02:15.768 vdpa/ifc: not in enabled drivers build config 00:02:15.768 vdpa/mlx5: not in enabled drivers build config 00:02:15.768 vdpa/nfp: not in enabled drivers build config 00:02:15.768 vdpa/sfc: not in enabled drivers build config 00:02:15.768 event/cnxk: not in enabled drivers build config 00:02:15.768 event/dlb2: not in enabled drivers build config 00:02:15.768 event/dpaa: not in enabled drivers build config 00:02:15.768 event/dpaa2: not in enabled drivers build config 00:02:15.768 event/dsw: not in enabled drivers build config 00:02:15.768 event/opdl: not in enabled drivers build config 00:02:15.768 event/skeleton: not in enabled drivers build config 00:02:15.768 event/sw: not in enabled drivers build config 00:02:15.768 event/octeontx: not in enabled drivers build config 00:02:15.768 baseband/acc: not in enabled drivers build config 00:02:15.768 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:15.768 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:15.768 baseband/la12xx: not in enabled drivers build config 00:02:15.768 baseband/null: not in enabled drivers 
build config 00:02:15.768 baseband/turbo_sw: not in enabled drivers build config 00:02:15.768 gpu/cuda: not in enabled drivers build config 00:02:15.768 00:02:15.768 00:02:15.768 Build targets in project: 224 00:02:15.768 00:02:15.768 DPDK 24.07.0-rc2 00:02:15.768 00:02:15.768 User defined options 00:02:15.768 libdir : lib 00:02:15.768 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:15.768 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:15.768 c_link_args : 00:02:15.768 enable_docs : false 00:02:15.768 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:15.768 enable_kmods : false 00:02:15.769 machine : native 00:02:15.769 tests : false 00:02:15.769 00:02:15.769 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.769 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:16.026 20:19:09 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:16.026 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:16.026 [1/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:16.026 [2/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:16.286 [3/723] Linking static target lib/librte_kvargs.a 00:02:16.286 [4/723] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:16.286 [5/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:16.286 [6/723] Linking static target lib/librte_log.a 00:02:16.542 [7/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:16.542 [8/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.542 [9/723] Linking static target lib/librte_argparse.a 00:02:16.799 [10/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.799 [11/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.799 [12/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:16.799 [13/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:16.799 [14/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:16.799 [15/723] Linking target lib/librte_log.so.24.2 00:02:16.799 [16/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:16.799 [17/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:17.057 [18/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:17.057 [19/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:17.315 [20/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:02:17.315 [21/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:17.315 [22/723] Linking target lib/librte_kvargs.so.24.2 00:02:17.315 [23/723] Linking target lib/librte_argparse.so.24.2 00:02:17.573 [24/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:02:17.573 [25/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:17.573 [26/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:17.831 [27/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:17.831 [28/723] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:17.831 [29/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:17.831 [30/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:17.831 [31/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:17.831 [32/723] Linking static target lib/librte_telemetry.a 00:02:18.089 [33/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:18.089 [34/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:18.347 [35/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:18.347 [36/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:18.604 [37/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.604 [38/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:18.604 [39/723] Linking target lib/librte_telemetry.so.24.2 00:02:18.604 [40/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:18.604 [41/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:18.604 [42/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:18.604 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:18.604 [44/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:02:18.604 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:18.604 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:18.604 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:18.604 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:19.169 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:19.426 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:19.426 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:19.426 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:19.426 [53/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:19.426 [54/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:19.684 [55/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:19.684 [56/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:19.684 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:19.684 [58/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:19.942 [59/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:19.942 [60/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:19.942 [61/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:20.200 [62/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:20.200 [63/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:20.200 [64/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:20.458 [65/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:20.458 [66/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:20.458 [67/723] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:20.715 [68/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:20.715 [69/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:20.715 [70/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:20.972 [71/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:20.972 [72/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:20.972 [73/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:20.972 [74/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:21.230 [75/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:21.230 [76/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:21.230 [77/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:21.230 [78/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:21.488 [79/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:21.745 [80/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:21.745 [81/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:21.745 [82/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:21.745 [83/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.746 [84/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:21.746 [85/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:22.003 [86/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:22.003 [87/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:22.261 [88/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:22.261 [89/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:22.261 [90/723] Linking static target lib/librte_eal.a 00:02:22.261 [91/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:22.261 [92/723] Linking static target lib/librte_ring.a 00:02:22.587 [93/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:22.587 [94/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:22.587 [95/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:22.587 [96/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:22.847 [97/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:22.847 [98/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.105 [99/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:23.105 [100/723] Linking static target lib/librte_mempool.a 00:02:23.105 [101/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:23.105 [102/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:23.105 [103/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:23.105 [104/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:23.105 [105/723] Linking static target lib/librte_rcu.a 00:02:23.363 [106/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:23.363 [107/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:23.363 [108/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:23.363 [109/723] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:23.363 [110/723] Linking static target lib/librte_mbuf.a 00:02:23.622 [111/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.622 [112/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:23.880 [113/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.880 [114/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:23.880 [115/723] Linking static target lib/librte_net.a 00:02:23.880 [116/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:23.880 [117/723] Linking static target lib/librte_meter.a 00:02:24.137 [118/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.138 [119/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.138 [120/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:24.138 [121/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:24.138 [122/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.138 [123/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:24.396 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:24.963 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.963 [126/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:25.225 [127/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:25.225 [128/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:25.225 [129/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:25.225 [130/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:25.483 [131/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:25.483 [132/723] Linking static target lib/librte_pci.a 00:02:25.483 [133/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:25.483 [134/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:25.483 [135/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:25.741 [136/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:25.741 [137/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.741 [138/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:25.741 [139/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:25.741 [140/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:25.999 [141/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:25.999 [142/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:25.999 [143/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:25.999 [144/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:25.999 [145/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:25.999 [146/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:25.999 [147/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:25.999 [148/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:26.257 [149/723] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:26.257 [150/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:26.257 [151/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:26.257 [152/723] Linking static target lib/librte_cmdline.a 00:02:26.516 [153/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:26.773 [154/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:26.773 [155/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:26.773 [156/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:26.773 [157/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:26.773 [158/723] Linking static target lib/librte_metrics.a 00:02:27.342 [159/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.342 [160/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.601 [161/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:27.601 [162/723] Linking static target lib/librte_timer.a 00:02:27.859 [163/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:28.117 [164/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:28.117 [165/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.683 [166/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:28.939 [167/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:28.939 [168/723] Linking static target lib/librte_ethdev.a 00:02:28.939 [169/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:29.197 [170/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:29.454 [171/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:29.454 [172/723] Linking static target lib/librte_hash.a 00:02:29.710 [173/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.710 [174/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:29.710 [175/723] Linking static target lib/acl/libavx2_tmp.a 00:02:29.710 [176/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:29.710 [177/723] Linking static target lib/librte_bitratestats.a 00:02:29.710 [178/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:29.968 [179/723] Linking target lib/librte_eal.so.24.2 00:02:29.968 [180/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:29.968 [181/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:29.968 [182/723] Linking static target lib/acl/libavx512_tmp.a 00:02:29.968 [183/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:30.224 [184/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.224 [185/723] Linking target lib/librte_ring.so.24.2 00:02:30.224 [186/723] Linking target lib/librte_meter.so.24.2 00:02:30.224 [187/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:30.224 [188/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.480 [189/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:30.480 [190/723] Linking target lib/librte_rcu.so.24.2 00:02:30.480 [191/723] 
Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:30.480 [192/723] Linking target lib/librte_mempool.so.24.2 00:02:30.480 [193/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:30.480 [194/723] Linking target lib/librte_pci.so.24.2 00:02:30.480 [195/723] Linking static target lib/librte_bbdev.a 00:02:30.480 [196/723] Linking target lib/librte_timer.so.24.2 00:02:30.480 [197/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:30.480 [198/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:30.480 [199/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:30.480 [200/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:30.480 [201/723] Linking static target lib/librte_acl.a 00:02:30.738 [202/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:30.738 [203/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:30.738 [204/723] Linking target lib/librte_mbuf.so.24.2 00:02:30.738 [205/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:30.738 [206/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:30.995 [207/723] Linking target lib/librte_net.so.24.2 00:02:30.995 [208/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.995 [209/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:30.995 [210/723] Linking target lib/librte_acl.so.24.2 00:02:30.995 [211/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:31.253 [212/723] Linking target lib/librte_cmdline.so.24.2 00:02:31.253 [213/723] Linking target lib/librte_hash.so.24.2 00:02:31.253 [214/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.253 [215/723] Linking target lib/librte_bbdev.so.24.2 00:02:31.253 [216/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:31.253 [217/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:02:31.253 [218/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:31.510 [219/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:31.510 [220/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:31.510 [221/723] Linking static target lib/librte_cfgfile.a 00:02:31.510 [222/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:32.124 [223/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:32.124 [224/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.124 [225/723] Linking target lib/librte_cfgfile.so.24.2 00:02:32.124 [226/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:32.382 [227/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:32.382 [228/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:32.382 [229/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:32.382 [230/723] Linking static target lib/librte_bpf.a 00:02:32.640 [231/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:32.640 [232/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:32.640 [233/723] Linking static target lib/librte_compressdev.a 00:02:32.640 [234/723] 
Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:32.640 [235/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:32.640 [236/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.206 [237/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:33.206 [238/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:33.206 [239/723] Linking static target lib/librte_distributor.a 00:02:33.206 [240/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.206 [241/723] Linking target lib/librte_compressdev.so.24.2 00:02:33.206 [242/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.465 [243/723] Linking target lib/librte_distributor.so.24.2 00:02:33.465 [244/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:34.030 [245/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:34.030 [246/723] Linking static target lib/librte_dmadev.a 00:02:34.288 [247/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:34.288 [248/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:34.288 [249/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:34.546 [250/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:34.804 [251/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.804 [252/723] Linking target lib/librte_dmadev.so.24.2 00:02:35.063 [253/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:35.063 [254/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:35.063 [255/723] Linking static target lib/librte_efd.a 00:02:35.321 [256/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.321 [257/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.321 [258/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:35.321 [259/723] Linking target lib/librte_efd.so.24.2 00:02:35.321 [260/723] Linking static target lib/librte_cryptodev.a 00:02:35.321 [261/723] Linking target lib/librte_ethdev.so.24.2 00:02:35.321 [262/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:35.322 [263/723] Linking static target lib/librte_gpudev.a 00:02:35.580 [264/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:35.580 [265/723] Linking target lib/librte_metrics.so.24.2 00:02:35.580 [266/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:35.838 [267/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:35.838 [268/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:35.838 [269/723] Linking static target lib/librte_dispatcher.a 00:02:35.838 [270/723] Linking target lib/librte_bitratestats.so.24.2 00:02:35.838 [271/723] Linking target lib/librte_bpf.so.24.2 00:02:35.838 [272/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:36.095 [273/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 
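All of the object and sym_chk targets in this run come out of the meson configuration summarized above (prefix /home/vagrant/spdk_repo/dpdk/build, only the pci/vdev buses, the ring mempool and the i40e net driver enabled). The WARNING notes the legacy `meson [options]` form was used by autobuild_common.sh; a minimal sketch of the equivalent modern configure-and-build step, using only the option values printed in the "User defined options" block (the exact autobuild invocation is not part of this excerpt), would be:

  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Dmachine=native \
      -Dtests=false \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base   # option names as printed in the summary
  ninja -C build-tmp -j10   # the command actually shown in the log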
00:02:36.354 [274/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:36.354 [275/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:36.354 [276/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.354 [277/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.354 [278/723] Linking target lib/librte_gpudev.so.24.2 00:02:36.354 [279/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:36.613 [280/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:36.872 [281/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:36.872 [282/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.872 [283/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:36.872 [284/723] Linking static target lib/librte_eventdev.a 00:02:36.872 [285/723] Linking target lib/librte_cryptodev.so.24.2 00:02:36.872 [286/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:36.872 [287/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:36.872 [288/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:37.129 [289/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:37.129 [290/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:37.129 [291/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:37.129 [292/723] Linking static target lib/librte_gro.a 00:02:37.129 [293/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:37.129 [294/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:37.387 [295/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.387 [296/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:37.387 [297/723] Linking static target lib/librte_gso.a 00:02:37.387 [298/723] Linking target lib/librte_gro.so.24.2 00:02:37.645 [299/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.645 [300/723] Linking target lib/librte_gso.so.24.2 00:02:37.645 [301/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:37.645 [302/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:37.903 [303/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:37.903 [304/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:37.903 [305/723] Linking static target lib/librte_jobstats.a 00:02:37.903 [306/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:37.903 [307/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:38.161 [308/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:38.161 [309/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:38.161 [310/723] Linking static target lib/librte_ip_frag.a 00:02:38.161 [311/723] Linking static target lib/librte_latencystats.a 00:02:38.161 [312/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.419 [313/723] Linking target lib/librte_jobstats.so.24.2 00:02:38.419 [314/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 
00:02:38.419 [315/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:38.419 [316/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.419 [317/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:38.678 [318/723] Linking target lib/librte_latencystats.so.24.2 00:02:38.678 [319/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:38.678 [320/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.678 [321/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:38.678 [322/723] Linking target lib/librte_ip_frag.so.24.2 00:02:38.678 [323/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:38.678 [324/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:02:38.962 [325/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:39.220 [326/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.220 [327/723] Linking target lib/librte_eventdev.so.24.2 00:02:39.220 [328/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:39.220 [329/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:39.478 [330/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:02:39.478 [331/723] Linking target lib/librte_dispatcher.so.24.2 00:02:39.478 [332/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:39.478 [333/723] Linking static target lib/librte_pcapng.a 00:02:39.478 [334/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:39.737 [335/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:39.737 [336/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.737 [337/723] Linking target lib/librte_pcapng.so.24.2 00:02:39.994 [338/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:39.995 [339/723] Linking static target lib/librte_lpm.a 00:02:39.995 [340/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:02:39.995 [341/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:39.995 [342/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:40.252 [343/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:40.511 [344/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.511 [345/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:40.511 [346/723] Linking target lib/librte_lpm.so.24.2 00:02:40.511 [347/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:40.511 [348/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:40.511 [349/723] Linking static target lib/librte_member.a 00:02:40.511 [350/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:40.511 [351/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:40.511 [352/723] Linking static target lib/librte_rawdev.a 00:02:40.769 [353/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:02:40.769 [354/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:40.769 [355/723] Linking static target lib/librte_regexdev.a 00:02:40.769 
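Individual targets in this 723-step build can be inspected or rebuilt without rerunning everything; a hypothetical spot-check, with the target path taken from the linking line just above:

  meson configure /home/vagrant/spdk_repo/dpdk/build-tmp | grep -E 'enable_drivers|machine|tests'   # print the option values behind this build
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp lib/librte_regexdev.a                             # rebuild one static library by its output path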
[356/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:40.769 [357/723] Linking static target lib/librte_power.a 00:02:41.027 [358/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.027 [359/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:41.027 [360/723] Linking target lib/librte_member.so.24.2 00:02:41.027 [361/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.284 [362/723] Linking target lib/librte_rawdev.so.24.2 00:02:41.284 [363/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:41.284 [364/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:41.542 [365/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:41.542 [366/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.542 [367/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:41.542 [368/723] Linking target lib/librte_power.so.24.2 00:02:41.542 [369/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:41.542 [370/723] Linking static target lib/librte_mldev.a 00:02:41.800 [371/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.800 [372/723] Linking target lib/librte_regexdev.so.24.2 00:02:41.800 [373/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:41.800 [374/723] Linking static target lib/librte_reorder.a 00:02:42.058 [375/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:42.058 [376/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:42.059 [377/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:42.317 [378/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.317 [379/723] Linking target lib/librte_reorder.so.24.2 00:02:42.317 [380/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:42.317 [381/723] Linking static target lib/librte_security.a 00:02:42.317 [382/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:02:42.574 [383/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:42.574 [384/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:42.832 [385/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:42.832 [386/723] Linking static target lib/librte_rib.a 00:02:42.832 [387/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:42.832 [388/723] Linking static target lib/librte_stack.a 00:02:42.832 [389/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.832 [390/723] Linking target lib/librte_security.so.24.2 00:02:43.089 [391/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:43.089 [392/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.089 [393/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:43.089 [394/723] Linking static target lib/librte_sched.a 00:02:43.089 [395/723] Linking target lib/librte_stack.so.24.2 00:02:43.089 [396/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:02:43.089 [397/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:43.347 [398/723] Generating lib/rib.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:43.347 [399/723] Linking target lib/librte_rib.so.24.2 00:02:43.604 [400/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:43.604 [401/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:02:43.604 [402/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.604 [403/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.604 [404/723] Linking target lib/librte_mldev.so.24.2 00:02:43.604 [405/723] Linking target lib/librte_sched.so.24.2 00:02:43.862 [406/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:02:44.119 [407/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:44.684 [408/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:44.684 [409/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:44.942 [410/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:45.202 [411/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:45.202 [412/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:45.202 [413/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:45.460 [414/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:45.460 [415/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:46.026 [416/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:46.285 [417/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:46.285 [418/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:46.285 [419/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:46.285 [420/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:46.285 [421/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:46.285 [422/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:46.285 [423/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:46.852 [424/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:46.852 [425/723] Linking static target lib/librte_ipsec.a 00:02:47.110 [426/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:47.110 [427/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:47.110 [428/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:47.368 [429/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:47.368 [430/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.368 [431/723] Linking target lib/librte_ipsec.so.24.2 00:02:47.626 [432/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:47.626 [433/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:02:47.626 [434/723] Linking static target lib/librte_pdcp.a 00:02:47.626 [435/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:47.626 [436/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:47.626 [437/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:47.626 [438/723] Linking static target lib/librte_fib.a 00:02:47.885 [439/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.143 [440/723] Linking target lib/librte_pdcp.so.24.2 00:02:48.143 [441/723] Generating 
lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.143 [442/723] Linking target lib/librte_fib.so.24.2 00:02:48.400 [443/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:48.400 [444/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:48.658 [445/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:48.658 [446/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:48.658 [447/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:49.225 [448/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:49.483 [449/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:49.741 [450/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:49.741 [451/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:49.741 [452/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:49.999 [453/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:49.999 [454/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:49.999 [455/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:49.999 [456/723] Linking static target lib/librte_pdump.a 00:02:50.257 [457/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.257 [458/723] Linking target lib/librte_pdump.so.24.2 00:02:50.257 [459/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:50.515 [460/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:50.515 [461/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:50.780 [462/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:50.780 [463/723] Linking static target lib/librte_port.a 00:02:50.780 [464/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:51.037 [465/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:51.038 [466/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:51.295 [467/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:51.554 [468/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.554 [469/723] Linking target lib/librte_port.so.24.2 00:02:51.554 [470/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:51.554 [471/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:51.554 [472/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:51.811 [473/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:51.811 [474/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:02:52.069 [475/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:52.328 [476/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:52.328 [477/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:52.328 [478/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:52.586 [479/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:52.844 [480/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:53.102 [481/723] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:53.361 [482/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:53.361 [483/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:53.361 [484/723] Linking static target lib/librte_table.a 00:02:53.619 [485/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:53.877 [486/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:53.877 [487/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:53.877 [488/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.135 [489/723] Linking target lib/librte_table.so.24.2 00:02:54.135 [490/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:02:54.135 [491/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:54.393 [492/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:54.393 [493/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:54.651 [494/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:54.651 [495/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:54.910 [496/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:54.910 [497/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:55.169 [498/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:55.428 [499/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:55.428 [500/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:55.694 [501/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:55.952 [502/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:55.952 [503/723] Linking static target lib/librte_graph.a 00:02:55.952 [504/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:56.210 [505/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:56.776 [506/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:56.776 [507/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.776 [508/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:56.776 [509/723] Linking target lib/librte_graph.so.24.2 00:02:56.777 [510/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:56.777 [511/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:56.777 [512/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:56.777 [513/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:02:56.777 [514/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:57.342 [515/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:57.342 [516/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:57.342 [517/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:57.600 [518/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:57.600 [519/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:57.600 [520/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:57.600 [521/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:58.532 [522/723] Compiling C object 
lib/librte_node.a.p/node_udp4_input.c.o 00:02:58.532 [523/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:58.532 [524/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:58.532 [525/723] Linking static target lib/librte_node.a 00:02:58.532 [526/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:58.532 [527/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:58.532 [528/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:58.532 [529/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:58.532 [530/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:58.532 [531/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.532 [532/723] Linking static target drivers/librte_bus_vdev.a 00:02:58.790 [533/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.790 [534/723] Linking target lib/librte_node.so.24.2 00:02:58.790 [535/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.790 [536/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:58.790 [537/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:58.790 [538/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.790 [539/723] Linking static target drivers/librte_bus_pci.a 00:02:58.790 [540/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.790 [541/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:59.067 [542/723] Linking target drivers/librte_bus_vdev.so.24.2 00:02:59.067 [543/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:59.067 [544/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:59.067 [545/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:02:59.324 [546/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:59.324 [547/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:59.324 [548/723] Linking static target drivers/librte_mempool_ring.a 00:02:59.324 [549/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:59.324 [550/723] Linking target drivers/librte_mempool_ring.so.24.2 00:02:59.324 [551/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.324 [552/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:59.324 [553/723] Linking target drivers/librte_bus_pci.so.24.2 00:02:59.581 [554/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:59.581 [555/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:03:00.147 [556/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:00.147 [557/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:00.713 [558/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:00.713 [559/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:01.279 [560/723] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:01.279 [561/723] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:01.537 [562/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:01.537 [563/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:01.537 [564/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:01.537 [565/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:01.537 [566/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:02.101 [567/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:02.101 [568/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:02.359 [569/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:02.618 [570/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:02.618 [571/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:02.618 [572/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:03:03.182 [573/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:03.182 [574/723] Linking static target lib/librte_vhost.a 00:03:03.182 [575/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:03.439 [576/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:03.439 [577/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:04.005 [578/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:04.005 [579/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:04.005 [580/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:04.005 [581/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:04.264 [582/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:04.264 [583/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.522 [584/723] Linking target lib/librte_vhost.so.24.2 00:03:04.781 [585/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:04.781 [586/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:04.781 [587/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:04.781 [588/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:03:04.781 [589/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:05.039 [590/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:05.039 [591/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:05.297 [592/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:05.556 [593/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:05.556 [594/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:05.556 [595/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:05.556 [596/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:05.556 [597/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:05.813 [598/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:06.377 [599/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:06.377 [600/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 
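With libi40e_base and the librte_net_i40e PMD linking and the dpdk-* test apps compiling, the DPDK half of the job is nearly done; this build lives under spdk_repo because SPDK's autobuild (autobuild_common.sh, see the ninja step above) consumes it next. That consumption step is outside this excerpt, but a typical hand-run equivalent would look roughly like the following (the SPDK checkout path is an assumption; only .../spdk_repo/dpdk appears in this log):

  cd /home/vagrant/spdk_repo/spdk                                # assumed location of the SPDK checkout
  ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build     # point SPDK at the DPDK prefix built here
  make -j10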
00:03:06.377 [601/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:06.377 [602/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:06.377 [603/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:06.377 [604/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:06.635 [605/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:06.635 [606/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:06.894 [607/723] Linking static target drivers/librte_net_i40e.a 00:03:06.894 [608/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:06.894 [609/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:07.152 [610/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:07.152 [611/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:07.410 [612/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:07.410 [613/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:07.667 [614/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:07.667 [615/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.667 [616/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:07.667 [617/723] Linking target drivers/librte_net_i40e.so.24.2 00:03:07.924 [618/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:08.516 [619/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:08.516 [620/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:08.516 [621/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:08.516 [622/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:08.516 [623/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:08.516 [624/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:08.773 [625/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:09.030 [626/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:09.030 [627/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:09.287 [628/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:09.287 [629/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:09.287 [630/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:09.601 [631/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:09.859 [632/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:09.859 [633/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:09.859 [634/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:09.859 [635/723] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:10.793 [636/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:11.051 [637/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:11.051 [638/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:11.051 [639/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:11.051 [640/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:11.051 [641/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:11.051 [642/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:11.308 [643/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:11.308 [644/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:11.566 [645/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:11.566 [646/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:11.566 [647/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:11.824 [648/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:11.824 [649/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:12.082 [650/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:12.082 [651/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:12.340 [652/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:12.340 [653/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:12.340 [654/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:12.340 [655/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:12.907 [656/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:12.907 [657/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:12.907 [658/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:13.165 [659/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:13.165 [660/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:13.165 [661/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:13.165 [662/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:13.423 [663/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:13.423 [664/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:13.681 [665/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:13.681 [666/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:13.681 [667/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:13.939 [668/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:13.939 [669/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:14.198 [670/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:14.198 [671/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:14.456 [672/723] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:14.456 [673/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:14.715 [674/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:14.715 [675/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:14.715 [676/723] Linking static target lib/librte_pipeline.a 00:03:14.973 [677/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:15.231 [678/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:15.231 [679/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:15.231 [680/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:15.490 [681/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:15.490 [682/723] Linking target app/dpdk-dumpcap 00:03:15.748 [683/723] Linking target app/dpdk-graph 00:03:15.748 [684/723] Linking target app/dpdk-pdump 00:03:15.748 [685/723] Linking target app/dpdk-proc-info 00:03:15.748 [686/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:16.006 [687/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:16.006 [688/723] Linking target app/dpdk-test-acl 00:03:16.006 [689/723] Linking target app/dpdk-test-bbdev 00:03:16.265 [690/723] Linking target app/dpdk-test-cmdline 00:03:16.265 [691/723] Linking target app/dpdk-test-compress-perf 00:03:16.265 [692/723] Linking target app/dpdk-test-crypto-perf 00:03:16.524 [693/723] Linking target app/dpdk-test-dma-perf 00:03:16.524 [694/723] Linking target app/dpdk-test-fib 00:03:16.524 [695/723] Linking target app/dpdk-test-eventdev 00:03:16.524 [696/723] Linking target app/dpdk-test-flow-perf 00:03:16.783 [697/723] Linking target app/dpdk-test-gpudev 00:03:16.783 [698/723] Linking target app/dpdk-test-pipeline 00:03:17.041 [699/723] Linking target app/dpdk-test-mldev 00:03:17.299 [700/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:17.558 [701/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:17.558 [702/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:17.558 [703/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:17.558 [704/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:17.816 [705/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:17.816 [706/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.816 [707/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:17.816 [708/723] Linking target lib/librte_pipeline.so.24.2 00:03:18.074 [709/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:18.639 [710/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:18.639 [711/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:18.639 [712/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:18.639 [713/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:18.912 [714/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:18.912 [715/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:18.912 [716/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:19.175 [717/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:19.175 [718/723] Linking target app/dpdk-test-sad 00:03:19.175 [719/723] 
Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:19.175 [720/723] Linking target app/dpdk-test-regex 00:03:19.175 [721/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:19.740 [722/723] Linking target app/dpdk-test-security-perf 00:03:19.740 [723/723] Linking target app/dpdk-testpmd 00:03:19.740 20:20:13 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:03:19.998 20:20:13 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:19.998 20:20:13 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:19.998 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:19.998 [0/1] Installing files. 00:03:20.258 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:20.258 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.258 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 
00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.259 Installing 
/home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:20.259 
Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.259 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.259 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.260 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.261 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.261 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.261 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:20.262 
Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.262 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:20.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:20.263 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing 
lib/librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing 
lib/librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_regexdev.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.263 Installing lib/librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing lib/librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing drivers/librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:20.832 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing drivers/librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:20.832 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing drivers/librte_mempool_ring.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:20.832 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.832 Installing drivers/librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:20.832 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:20.832 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.832 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.833 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing 
/home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.834 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:20.835 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:20.835 Installing symlink pointing to librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:20.835 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:20.835 Installing symlink pointing to librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:20.835 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:20.835 Installing symlink pointing to librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.24 00:03:20.835 Installing symlink pointing to librte_argparse.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:20.835 Installing symlink pointing to librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:20.835 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:20.835 Installing symlink pointing to librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:20.835 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:20.835 Installing symlink pointing to librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:20.835 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:20.835 Installing symlink pointing to librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:20.835 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:20.835 Installing symlink pointing to librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:20.835 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:20.835 Installing symlink pointing to librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:20.835 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:20.835 Installing symlink pointing to librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:20.835 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:20.835 Installing symlink pointing to librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:20.835 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:20.835 Installing symlink pointing to librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:20.835 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
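The libdpdk.pc and libdpdk-libs.pc files installed just above are what downstream builds use to locate this private DPDK tree. A minimal sketch of querying them by hand, assuming the install prefix shown in this log; the exact flag lists depend on which DPDK components were enabled:

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk    # version of this checkout (24.07-rc2 here)
  pkg-config --cflags libdpdk        # -I/home/vagrant/spdk_repo/dpdk/build/include plus defines
  pkg-config --libs libdpdk          # -L.../build/lib and the -lrte_* link line

The SPDK configure step further down reports "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs", which is this same mechanism.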
00:03:20.835 Installing symlink pointing to librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:20.835 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:20.835 Installing symlink pointing to librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:20.835 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:20.835 Installing symlink pointing to librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:20.835 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:20.835 Installing symlink pointing to librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:20.835 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:20.835 Installing symlink pointing to librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:20.835 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:20.835 Installing symlink pointing to librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:20.835 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:20.835 Installing symlink pointing to librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:20.835 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:20.835 Installing symlink pointing to librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:20.835 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:20.835 Installing symlink pointing to librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:20.835 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:20.835 Installing symlink pointing to librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:20.835 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:20.835 Installing symlink pointing to librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:20.835 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:20.835 Installing symlink pointing to librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:20.835 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:20.835 Installing symlink pointing to librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:20.835 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:20.835 Installing symlink pointing to librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:20.835 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 
00:03:20.835 Installing symlink pointing to librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:20.835 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:20.835 Installing symlink pointing to librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:20.835 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:20.835 Installing symlink pointing to librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:20.835 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:20.836 Installing symlink pointing to librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:20.836 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:20.836 Installing symlink pointing to librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:20.836 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:20.836 Installing symlink pointing to librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:20.836 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:20.836 Installing symlink pointing to librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:20.836 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:20.836 Installing symlink pointing to librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:20.836 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:20.836 Installing symlink pointing to librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:20.836 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:20.836 Installing symlink pointing to librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:20.836 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:20.836 Installing symlink pointing to librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:20.836 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:20.836 Installing symlink pointing to librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:20.836 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:03:20.836 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:03:20.836 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:03:20.836 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:03:20.836 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:03:20.836 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:03:20.836 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:03:20.836 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 
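The './librte_*.so*' -> 'dpdk/pmds-24.2/...' messages around this point show the PMD driver libraries being moved into DPDK's plugin directory, while the long run of "Installing symlink" lines builds the usual two-level soname chain for every library. Roughly equivalent shell, shown only to illustrate the resulting layout (the actual work is done by meson install plus the symlink-drivers-solibs.sh script invoked at the end of this install step):

  cd /home/vagrant/spdk_repo/dpdk/build/lib
  ln -sf librte_eal.so.24.2 librte_eal.so.24   # runtime soname the dynamic loader resolves
  ln -sf librte_eal.so.24   librte_eal.so      # dev-time name used by -lrte_eal at link time
  mkdir -p dpdk/pmds-24.2
  mv librte_bus_pci.so* dpdk/pmds-24.2/        # drivers live in the plugin dir EAL scans at startup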
00:03:20.836 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:03:20.836 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:03:20.836 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:03:20.836 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:03:20.836 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:20.836 Installing symlink pointing to librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:20.836 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:20.836 Installing symlink pointing to librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:20.836 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:20.836 Installing symlink pointing to librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:20.836 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:20.836 Installing symlink pointing to librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:20.836 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:20.836 Installing symlink pointing to librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:20.836 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:20.836 Installing symlink pointing to librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:20.836 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:20.836 Installing symlink pointing to librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:20.836 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:20.836 Installing symlink pointing to librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:20.836 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:20.836 Installing symlink pointing to librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:20.836 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:20.836 Installing symlink pointing to librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:20.836 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:20.836 Installing symlink pointing to librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:20.836 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:20.836 Installing symlink pointing to librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:20.836 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:20.836 Installing symlink pointing to librte_fib.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:20.836 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:20.836 Installing symlink pointing to librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:20.836 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:20.836 Installing symlink pointing to librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:20.836 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:20.836 Installing symlink pointing to librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:20.836 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:20.836 Installing symlink pointing to librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:20.836 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:20.836 Installing symlink pointing to librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:20.836 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:20.836 Installing symlink pointing to librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:20.836 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:20.836 Installing symlink pointing to librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:03:20.836 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:03:20.836 Installing symlink pointing to librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:03:20.836 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:03:20.836 Installing symlink pointing to librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:03:20.836 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:03:20.836 Installing symlink pointing to librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:03:20.836 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:03:20.836 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:03:20.836 20:20:14 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:03:20.836 20:20:14 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:20.836 00:03:20.836 real 1m11.740s 00:03:20.836 user 9m0.367s 00:03:20.836 sys 1m21.910s 00:03:20.836 20:20:14 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:20.836 ************************************ 00:03:20.836 END TEST build_native_dpdk 00:03:20.836 ************************************ 00:03:20.836 20:20:14 build_native_dpdk -- 
common/autotest_common.sh@10 -- $ set +x 00:03:20.836 20:20:14 -- common/autotest_common.sh@1142 -- $ return 0 00:03:20.836 20:20:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:20.836 20:20:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:20.836 20:20:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:20.836 20:20:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:20.836 20:20:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:20.836 20:20:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:20.836 20:20:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:20.836 20:20:14 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:21.095 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:21.095 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.095 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:21.095 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:21.661 Using 'verbs' RDMA provider 00:03:34.822 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:47.070 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:47.329 Creating mk/config.mk...done. 00:03:47.329 Creating mk/cc.flags.mk...done. 00:03:47.329 Type 'make' to build. 00:03:47.329 20:20:41 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:47.329 20:20:41 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:47.329 20:20:41 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:47.329 20:20:41 -- common/autotest_common.sh@10 -- $ set +x 00:03:47.329 ************************************ 00:03:47.329 START TEST make 00:03:47.329 ************************************ 00:03:47.329 20:20:41 make -- common/autotest_common.sh@1123 -- $ make -j10 00:03:47.587 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:47.587 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:47.587 meson setup builddir \ 00:03:47.587 -Dwith-libaio=enabled \ 00:03:47.587 -Dwith-liburing=enabled \ 00:03:47.587 -Dwith-libvfn=disabled \ 00:03:47.587 -Dwith-spdk=false && \ 00:03:47.587 meson compile -C builddir && \ 00:03:47.587 cd -) 00:03:47.587 make[1]: Nothing to be done for 'all'. 
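With DPDK installed, autobuild configures SPDK against it via --with-dpdk=/home/vagrant/spdk_repo/dpdk/build (resolved through the pkg-config files noted earlier) and then starts make, which first drops into the bundled xnvme submodule and configures it as a separate meson project with the options echoed above. A small sketch for checking what configure recorded, assuming the repo layout in this log; the exact CONFIG_* key names can vary between SPDK versions:

  cd /home/vagrant/spdk_repo/spdk
  grep -i dpdk mk/config.mk     # DPDK directory/libs the SPDK build will use
  grep -i xnvme mk/config.mk    # confirms the --with-xnvme option took effect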
00:03:50.116 The Meson build system 00:03:50.116 Version: 1.3.1 00:03:50.116 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:50.116 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:50.116 Build type: native build 00:03:50.116 Project name: xnvme 00:03:50.116 Project version: 0.7.3 00:03:50.116 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:50.116 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:50.116 Host machine cpu family: x86_64 00:03:50.116 Host machine cpu: x86_64 00:03:50.116 Message: host_machine.system: linux 00:03:50.116 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:50.116 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:50.116 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:50.116 Run-time dependency threads found: YES 00:03:50.116 Has header "setupapi.h" : NO 00:03:50.116 Has header "linux/blkzoned.h" : YES 00:03:50.116 Has header "linux/blkzoned.h" : YES (cached) 00:03:50.116 Has header "libaio.h" : YES 00:03:50.116 Library aio found: YES 00:03:50.116 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:50.116 Run-time dependency liburing found: YES 2.2 00:03:50.116 Dependency libvfn skipped: feature with-libvfn disabled 00:03:50.116 Run-time dependency appleframeworks found: NO (tried framework) 00:03:50.116 Run-time dependency appleframeworks found: NO (tried framework) 00:03:50.116 Configuring xnvme_config.h using configuration 00:03:50.116 Configuring xnvme.spec using configuration 00:03:50.116 Run-time dependency bash-completion found: YES 2.11 00:03:50.116 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:50.116 Program cp found: YES (/usr/bin/cp) 00:03:50.116 Has header "winsock2.h" : NO 00:03:50.116 Has header "dbghelp.h" : NO 00:03:50.116 Library rpcrt4 found: NO 00:03:50.116 Library rt found: YES 00:03:50.116 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:50.116 Found CMake: /usr/bin/cmake (3.27.7) 00:03:50.116 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:50.116 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:50.116 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:50.116 Build targets in project: 32 00:03:50.116 00:03:50.116 xnvme 0.7.3 00:03:50.116 00:03:50.116 User defined options 00:03:50.116 with-libaio : enabled 00:03:50.116 with-liburing: enabled 00:03:50.116 with-libvfn : disabled 00:03:50.116 with-spdk : false 00:03:50.116 00:03:50.116 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:50.684 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:50.684 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:50.684 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:50.684 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:50.684 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:50.684 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:50.684 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:50.684 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:50.684 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:50.684 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:50.942 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:50.942 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:50.942 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:50.942 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:50.942 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:50.942 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:50.942 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:50.942 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:50.942 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:50.942 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:50.942 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:50.942 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:50.942 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:50.942 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:50.942 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:50.942 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:50.942 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:50.942 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:50.942 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:51.201 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:51.201 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:51.201 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:51.201 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:51.201 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:51.201 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:51.201 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:51.201 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:51.201 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:51.201 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:51.201 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:51.201 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:51.201 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:51.201 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:51.201 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:51.201 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:51.201 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:51.201 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:51.201 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:51.201 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:51.201 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:51.201 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:51.201 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:51.201 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:51.201 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:51.201 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:51.201 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:51.201 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:51.459 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:51.459 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:51.459 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:51.459 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:51.459 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:51.459 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:51.459 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:51.459 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:51.459 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:51.459 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:51.459 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:51.459 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:51.459 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:51.459 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:51.459 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:51.717 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:51.717 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:51.717 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:51.717 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:51.717 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:51.717 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:51.717 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:51.717 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:51.717 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:51.717 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:51.717 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:51.717 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:51.717 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:51.975 [85/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:51.975 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:51.975 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:51.975 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:51.975 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:51.975 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:51.975 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:51.975 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:51.975 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:51.975 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:51.975 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:51.975 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:51.975 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:51.975 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:51.975 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:51.975 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:51.975 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:51.975 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:51.975 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:51.975 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:52.234 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:52.234 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:52.234 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:52.234 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:52.234 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:52.234 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:52.234 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:52.234 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:52.234 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:52.234 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:52.234 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:52.234 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:52.234 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:52.234 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:52.234 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:52.234 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:52.234 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:52.234 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:52.234 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:52.234 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:52.234 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:52.234 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:52.234 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:52.234 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:52.234 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:52.234 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:52.234 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:52.492 [132/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:52.492 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:52.492 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:52.492 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:52.492 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:52.492 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:52.492 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:52.492 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:52.492 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:52.492 [141/203] Linking target lib/libxnvme.so 00:03:52.492 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 
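xnvme is compiled twice in this ninja run: the lib/libxnvme.so.p/ objects feed the shared library linked at step [141/203] above, and the lib/libxnvme.a.p/ objects feed the static archive linked a little further down. A quick way to confirm both artifacts once ninja finishes, assuming the builddir used in this log:

  ls -l /home/vagrant/spdk_repo/spdk/xnvme/builddir/lib/libxnvme.so* \
        /home/vagrant/spdk_repo/spdk/xnvme/builddir/lib/libxnvme.a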
00:03:52.492 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:52.493 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:52.751 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:52.751 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:52.751 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:52.751 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:52.751 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:52.751 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:52.751 [151/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:52.751 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:52.751 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:52.751 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:52.751 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:52.751 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:52.751 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:53.010 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:53.010 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:53.010 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:53.010 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:53.010 [162/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:53.010 [163/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:53.010 [164/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:53.010 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:53.010 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:53.010 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:53.010 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:53.010 [169/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:53.010 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:53.269 [171/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:53.269 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:53.269 [173/203] Linking static target lib/libxnvme.a 00:03:53.269 [174/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:53.269 [175/203] Linking target tests/xnvme_tests_enum 00:03:53.269 [176/203] Linking target tests/xnvme_tests_lblk 00:03:53.269 [177/203] Linking target tests/xnvme_tests_buf 00:03:53.269 [178/203] Linking target tests/xnvme_tests_async_intf 00:03:53.269 [179/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:53.269 [180/203] Linking target tests/xnvme_tests_xnvme_file 00:03:53.269 [181/203] Linking target tests/xnvme_tests_cli 00:03:53.269 [182/203] Linking target tests/xnvme_tests_ioworker 00:03:53.269 [183/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:53.269 [184/203] Linking target tests/xnvme_tests_znd_append 00:03:53.269 [185/203] Linking target tests/xnvme_tests_scc 00:03:53.269 [186/203] Linking target tests/xnvme_tests_kvs 00:03:53.269 [187/203] Linking target tests/xnvme_tests_map 00:03:53.269 [188/203] Linking target tests/xnvme_tests_znd_state 00:03:53.269 [189/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:53.269 [190/203] Linking target tools/lblk 00:03:53.269 [191/203] Linking target 
tools/kvs 00:03:53.269 [192/203] Linking target tools/xnvme 00:03:53.269 [193/203] Linking target examples/xnvme_io_async 00:03:53.269 [194/203] Linking target tools/zoned 00:03:53.269 [195/203] Linking target examples/zoned_io_async 00:03:53.269 [196/203] Linking target examples/xnvme_enum 00:03:53.527 [197/203] Linking target tools/xdd 00:03:53.527 [198/203] Linking target tools/xnvme_file 00:03:53.527 [199/203] Linking target examples/xnvme_dev 00:03:53.527 [200/203] Linking target examples/xnvme_single_async 00:03:53.527 [201/203] Linking target examples/zoned_io_sync 00:03:53.528 [202/203] Linking target examples/xnvme_hello 00:03:53.528 [203/203] Linking target examples/xnvme_single_sync 00:03:53.528 INFO: autodetecting backend as ninja 00:03:53.528 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:53.528 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:15.499 CC lib/log/log.o 00:04:15.499 CC lib/log/log_flags.o 00:04:15.499 CC lib/log/log_deprecated.o 00:04:15.499 CC lib/ut_mock/mock.o 00:04:15.499 CC lib/ut/ut.o 00:04:15.499 LIB libspdk_ut_mock.a 00:04:15.499 SO libspdk_ut_mock.so.6.0 00:04:15.499 LIB libspdk_log.a 00:04:15.499 LIB libspdk_ut.a 00:04:15.499 SO libspdk_ut.so.2.0 00:04:15.499 SO libspdk_log.so.7.0 00:04:15.499 SYMLINK libspdk_ut_mock.so 00:04:15.499 SYMLINK libspdk_ut.so 00:04:15.499 SYMLINK libspdk_log.so 00:04:15.499 CC lib/util/base64.o 00:04:15.499 CC lib/util/cpuset.o 00:04:15.499 CC lib/util/crc16.o 00:04:15.499 CC lib/util/bit_array.o 00:04:15.499 CC lib/util/crc32c.o 00:04:15.499 CC lib/util/crc32.o 00:04:15.499 CC lib/ioat/ioat.o 00:04:15.499 CC lib/dma/dma.o 00:04:15.499 CXX lib/trace_parser/trace.o 00:04:15.499 CC lib/vfio_user/host/vfio_user_pci.o 00:04:15.499 CC lib/util/crc32_ieee.o 00:04:15.499 CC lib/util/crc64.o 00:04:15.499 CC lib/util/dif.o 00:04:15.499 CC lib/vfio_user/host/vfio_user.o 00:04:15.499 CC lib/util/fd.o 00:04:15.500 CC lib/util/file.o 00:04:15.500 LIB libspdk_dma.a 00:04:15.500 CC lib/util/hexlify.o 00:04:15.500 CC lib/util/iov.o 00:04:15.500 SO libspdk_dma.so.4.0 00:04:15.500 LIB libspdk_ioat.a 00:04:15.500 SO libspdk_ioat.so.7.0 00:04:15.500 SYMLINK libspdk_dma.so 00:04:15.500 CC lib/util/math.o 00:04:15.500 CC lib/util/pipe.o 00:04:15.500 CC lib/util/strerror_tls.o 00:04:15.500 SYMLINK libspdk_ioat.so 00:04:15.500 CC lib/util/string.o 00:04:15.500 CC lib/util/uuid.o 00:04:15.500 LIB libspdk_vfio_user.a 00:04:15.500 CC lib/util/fd_group.o 00:04:15.500 SO libspdk_vfio_user.so.5.0 00:04:15.500 CC lib/util/xor.o 00:04:15.500 CC lib/util/zipf.o 00:04:15.500 SYMLINK libspdk_vfio_user.so 00:04:15.500 LIB libspdk_util.a 00:04:15.500 LIB libspdk_trace_parser.a 00:04:15.500 SO libspdk_util.so.9.1 00:04:15.500 SO libspdk_trace_parser.so.5.0 00:04:15.500 SYMLINK libspdk_trace_parser.so 00:04:15.500 SYMLINK libspdk_util.so 00:04:15.500 CC lib/conf/conf.o 00:04:15.500 CC lib/json/json_parse.o 00:04:15.500 CC lib/json/json_util.o 00:04:15.500 CC lib/json/json_write.o 00:04:15.500 CC lib/env_dpdk/env.o 00:04:15.500 CC lib/vmd/vmd.o 00:04:15.500 CC lib/rdma_provider/common.o 00:04:15.500 CC lib/vmd/led.o 00:04:15.500 CC lib/idxd/idxd.o 00:04:15.500 CC lib/rdma_utils/rdma_utils.o 00:04:15.500 CC lib/env_dpdk/memory.o 00:04:15.758 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:15.758 LIB libspdk_conf.a 00:04:15.758 CC lib/env_dpdk/pci.o 00:04:15.758 SO libspdk_conf.so.6.0 00:04:15.758 SYMLINK libspdk_conf.so 00:04:15.758 CC lib/env_dpdk/init.o 00:04:15.758 CC lib/idxd/idxd_user.o 
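From here the log switches to SPDK's own make output: "CC" lines compile objects, "LIB libspdk_*.a" archives a component, and because this run was configured with --with-shared each component also gets an "SO libspdk_*.so.N.0" shared object plus a "SYMLINK libspdk_*.so" alias. A way to inspect the results after the build, assuming SPDK's default build/lib output directory and that env_dpdk dynamically links the shared DPDK built above:

  cd /home/vagrant/spdk_repo/spdk
  ls build/lib/libspdk_log.so*                    # versioned .so and its plain symlink
  ldd build/lib/libspdk_env_dpdk.so | grep rte_   # should list the librte_* libraries installed earlier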
00:04:15.758 LIB libspdk_rdma_utils.a 00:04:15.758 LIB libspdk_json.a 00:04:16.017 SO libspdk_rdma_utils.so.1.0 00:04:16.017 SO libspdk_json.so.6.0 00:04:16.017 SYMLINK libspdk_rdma_utils.so 00:04:16.017 CC lib/env_dpdk/threads.o 00:04:16.017 SYMLINK libspdk_json.so 00:04:16.017 CC lib/env_dpdk/pci_ioat.o 00:04:16.017 LIB libspdk_rdma_provider.a 00:04:16.275 SO libspdk_rdma_provider.so.6.0 00:04:16.275 SYMLINK libspdk_rdma_provider.so 00:04:16.275 CC lib/idxd/idxd_kernel.o 00:04:16.275 CC lib/env_dpdk/pci_virtio.o 00:04:16.275 CC lib/env_dpdk/pci_vmd.o 00:04:16.275 CC lib/env_dpdk/pci_idxd.o 00:04:16.535 CC lib/env_dpdk/pci_event.o 00:04:16.535 CC lib/env_dpdk/sigbus_handler.o 00:04:16.535 LIB libspdk_vmd.a 00:04:16.535 CC lib/env_dpdk/pci_dpdk.o 00:04:16.535 SO libspdk_vmd.so.6.0 00:04:16.535 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:16.535 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:16.535 CC lib/jsonrpc/jsonrpc_server.o 00:04:16.535 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:16.535 SYMLINK libspdk_vmd.so 00:04:16.535 LIB libspdk_idxd.a 00:04:16.535 CC lib/jsonrpc/jsonrpc_client.o 00:04:16.793 SO libspdk_idxd.so.12.0 00:04:16.793 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:16.793 SYMLINK libspdk_idxd.so 00:04:17.050 LIB libspdk_jsonrpc.a 00:04:17.308 SO libspdk_jsonrpc.so.6.0 00:04:17.308 SYMLINK libspdk_jsonrpc.so 00:04:17.628 CC lib/rpc/rpc.o 00:04:17.628 LIB libspdk_env_dpdk.a 00:04:17.886 SO libspdk_env_dpdk.so.14.1 00:04:17.886 LIB libspdk_rpc.a 00:04:17.886 SO libspdk_rpc.so.6.0 00:04:17.886 SYMLINK libspdk_env_dpdk.so 00:04:17.886 SYMLINK libspdk_rpc.so 00:04:18.145 CC lib/keyring/keyring.o 00:04:18.145 CC lib/keyring/keyring_rpc.o 00:04:18.145 CC lib/trace/trace.o 00:04:18.145 CC lib/notify/notify.o 00:04:18.145 CC lib/trace/trace_rpc.o 00:04:18.145 CC lib/trace/trace_flags.o 00:04:18.145 CC lib/notify/notify_rpc.o 00:04:18.403 LIB libspdk_notify.a 00:04:18.403 SO libspdk_notify.so.6.0 00:04:18.403 SYMLINK libspdk_notify.so 00:04:18.403 LIB libspdk_trace.a 00:04:18.404 LIB libspdk_keyring.a 00:04:18.404 SO libspdk_trace.so.10.0 00:04:18.404 SO libspdk_keyring.so.1.0 00:04:18.662 SYMLINK libspdk_trace.so 00:04:18.662 SYMLINK libspdk_keyring.so 00:04:18.919 CC lib/sock/sock.o 00:04:18.919 CC lib/sock/sock_rpc.o 00:04:18.919 CC lib/thread/thread.o 00:04:18.919 CC lib/thread/iobuf.o 00:04:19.483 LIB libspdk_sock.a 00:04:19.483 SO libspdk_sock.so.10.0 00:04:19.483 SYMLINK libspdk_sock.so 00:04:19.741 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:19.741 CC lib/nvme/nvme_ctrlr.o 00:04:19.741 CC lib/nvme/nvme_ns_cmd.o 00:04:19.741 CC lib/nvme/nvme_fabric.o 00:04:19.741 CC lib/nvme/nvme_pcie_common.o 00:04:19.741 CC lib/nvme/nvme_ns.o 00:04:19.741 CC lib/nvme/nvme_pcie.o 00:04:19.741 CC lib/nvme/nvme.o 00:04:19.741 CC lib/nvme/nvme_qpair.o 00:04:20.674 CC lib/nvme/nvme_quirks.o 00:04:20.674 CC lib/nvme/nvme_transport.o 00:04:20.674 CC lib/nvme/nvme_discovery.o 00:04:20.674 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:20.932 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:20.932 CC lib/nvme/nvme_tcp.o 00:04:20.932 LIB libspdk_thread.a 00:04:20.932 SO libspdk_thread.so.10.1 00:04:20.932 CC lib/nvme/nvme_opal.o 00:04:20.932 SYMLINK libspdk_thread.so 00:04:20.932 CC lib/nvme/nvme_io_msg.o 00:04:21.190 CC lib/nvme/nvme_poll_group.o 00:04:21.449 CC lib/nvme/nvme_zns.o 00:04:21.449 CC lib/nvme/nvme_stubs.o 00:04:21.449 CC lib/nvme/nvme_auth.o 00:04:21.449 CC lib/nvme/nvme_cuse.o 00:04:21.707 CC lib/accel/accel.o 00:04:21.988 CC lib/nvme/nvme_rdma.o 00:04:21.988 CC lib/accel/accel_rpc.o 00:04:21.988 CC lib/blob/blobstore.o 
00:04:22.247 CC lib/init/json_config.o 00:04:22.247 CC lib/accel/accel_sw.o 00:04:22.247 CC lib/virtio/virtio.o 00:04:22.504 CC lib/init/subsystem.o 00:04:22.762 CC lib/init/subsystem_rpc.o 00:04:22.763 CC lib/blob/request.o 00:04:22.763 CC lib/init/rpc.o 00:04:22.763 CC lib/blob/zeroes.o 00:04:23.021 CC lib/blob/blob_bs_dev.o 00:04:23.021 CC lib/virtio/virtio_vhost_user.o 00:04:23.021 LIB libspdk_init.a 00:04:23.021 SO libspdk_init.so.5.0 00:04:23.021 CC lib/virtio/virtio_vfio_user.o 00:04:23.297 CC lib/virtio/virtio_pci.o 00:04:23.297 SYMLINK libspdk_init.so 00:04:23.297 LIB libspdk_accel.a 00:04:23.560 SO libspdk_accel.so.15.1 00:04:23.560 CC lib/event/app.o 00:04:23.560 CC lib/event/reactor.o 00:04:23.560 CC lib/event/log_rpc.o 00:04:23.560 CC lib/event/app_rpc.o 00:04:23.560 CC lib/event/scheduler_static.o 00:04:23.560 SYMLINK libspdk_accel.so 00:04:23.560 LIB libspdk_virtio.a 00:04:23.560 SO libspdk_virtio.so.7.0 00:04:23.818 LIB libspdk_nvme.a 00:04:23.818 CC lib/bdev/bdev.o 00:04:23.818 CC lib/bdev/bdev_zone.o 00:04:23.818 CC lib/bdev/bdev_rpc.o 00:04:23.818 CC lib/bdev/part.o 00:04:23.818 SYMLINK libspdk_virtio.so 00:04:23.818 CC lib/bdev/scsi_nvme.o 00:04:24.075 SO libspdk_nvme.so.13.1 00:04:24.075 LIB libspdk_event.a 00:04:24.075 SO libspdk_event.so.14.0 00:04:24.334 SYMLINK libspdk_event.so 00:04:24.334 SYMLINK libspdk_nvme.so 00:04:26.864 LIB libspdk_blob.a 00:04:26.864 SO libspdk_blob.so.11.0 00:04:26.864 SYMLINK libspdk_blob.so 00:04:26.864 CC lib/lvol/lvol.o 00:04:26.864 CC lib/blobfs/blobfs.o 00:04:26.864 CC lib/blobfs/tree.o 00:04:27.428 LIB libspdk_bdev.a 00:04:27.684 SO libspdk_bdev.so.15.1 00:04:27.684 SYMLINK libspdk_bdev.so 00:04:27.943 CC lib/scsi/dev.o 00:04:27.943 CC lib/scsi/lun.o 00:04:27.943 CC lib/scsi/scsi.o 00:04:27.943 CC lib/scsi/port.o 00:04:27.943 CC lib/nbd/nbd.o 00:04:27.943 CC lib/nvmf/ctrlr.o 00:04:27.943 CC lib/ftl/ftl_core.o 00:04:27.943 CC lib/ublk/ublk.o 00:04:27.943 LIB libspdk_blobfs.a 00:04:28.200 SO libspdk_blobfs.so.10.0 00:04:28.200 LIB libspdk_lvol.a 00:04:28.200 SYMLINK libspdk_blobfs.so 00:04:28.200 CC lib/nvmf/ctrlr_discovery.o 00:04:28.200 SO libspdk_lvol.so.10.0 00:04:28.200 CC lib/nvmf/ctrlr_bdev.o 00:04:28.200 CC lib/ublk/ublk_rpc.o 00:04:28.200 CC lib/nbd/nbd_rpc.o 00:04:28.458 SYMLINK libspdk_lvol.so 00:04:28.458 CC lib/scsi/scsi_bdev.o 00:04:28.458 CC lib/ftl/ftl_init.o 00:04:28.716 CC lib/ftl/ftl_layout.o 00:04:28.716 LIB libspdk_nbd.a 00:04:28.716 CC lib/ftl/ftl_debug.o 00:04:28.716 CC lib/ftl/ftl_io.o 00:04:28.716 SO libspdk_nbd.so.7.0 00:04:28.716 SYMLINK libspdk_nbd.so 00:04:28.716 CC lib/ftl/ftl_sb.o 00:04:28.716 CC lib/ftl/ftl_l2p.o 00:04:28.973 CC lib/ftl/ftl_l2p_flat.o 00:04:28.973 CC lib/scsi/scsi_pr.o 00:04:28.973 CC lib/ftl/ftl_nv_cache.o 00:04:28.973 CC lib/ftl/ftl_band.o 00:04:29.229 CC lib/ftl/ftl_band_ops.o 00:04:29.229 CC lib/nvmf/subsystem.o 00:04:29.229 CC lib/scsi/scsi_rpc.o 00:04:29.229 CC lib/ftl/ftl_writer.o 00:04:29.229 CC lib/scsi/task.o 00:04:29.229 LIB libspdk_ublk.a 00:04:29.487 SO libspdk_ublk.so.3.0 00:04:29.487 CC lib/ftl/ftl_rq.o 00:04:29.487 SYMLINK libspdk_ublk.so 00:04:29.487 CC lib/ftl/ftl_reloc.o 00:04:29.487 CC lib/ftl/ftl_l2p_cache.o 00:04:29.487 CC lib/nvmf/nvmf.o 00:04:29.745 LIB libspdk_scsi.a 00:04:29.745 CC lib/nvmf/nvmf_rpc.o 00:04:29.745 SO libspdk_scsi.so.9.0 00:04:29.745 CC lib/nvmf/transport.o 00:04:29.745 CC lib/nvmf/tcp.o 00:04:30.002 SYMLINK libspdk_scsi.so 00:04:30.002 CC lib/nvmf/stubs.o 00:04:30.260 CC lib/ftl/ftl_p2l.o 00:04:30.518 CC lib/nvmf/mdns_server.o 00:04:30.776 CC 
lib/nvmf/rdma.o 00:04:31.033 CC lib/nvmf/auth.o 00:04:31.033 CC lib/ftl/mngt/ftl_mngt.o 00:04:31.033 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:31.291 CC lib/iscsi/conn.o 00:04:31.291 CC lib/vhost/vhost.o 00:04:31.291 CC lib/vhost/vhost_rpc.o 00:04:31.291 CC lib/vhost/vhost_scsi.o 00:04:31.550 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:31.550 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:31.550 CC lib/iscsi/init_grp.o 00:04:31.550 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:31.808 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:31.808 CC lib/iscsi/iscsi.o 00:04:32.066 CC lib/vhost/vhost_blk.o 00:04:32.066 CC lib/vhost/rte_vhost_user.o 00:04:32.066 CC lib/iscsi/md5.o 00:04:32.066 CC lib/iscsi/param.o 00:04:32.066 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:32.066 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:32.066 CC lib/iscsi/portal_grp.o 00:04:32.324 CC lib/iscsi/tgt_node.o 00:04:32.324 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:32.324 CC lib/iscsi/iscsi_subsystem.o 00:04:32.324 CC lib/iscsi/iscsi_rpc.o 00:04:32.324 CC lib/iscsi/task.o 00:04:32.324 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:32.583 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:32.583 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:32.841 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:32.841 CC lib/ftl/utils/ftl_conf.o 00:04:32.841 CC lib/ftl/utils/ftl_md.o 00:04:32.841 CC lib/ftl/utils/ftl_mempool.o 00:04:32.841 CC lib/ftl/utils/ftl_bitmap.o 00:04:32.841 CC lib/ftl/utils/ftl_property.o 00:04:33.100 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:33.100 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:33.100 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:33.100 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:33.100 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:33.100 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:33.360 LIB libspdk_vhost.a 00:04:33.360 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:33.360 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:33.360 SO libspdk_vhost.so.8.0 00:04:33.360 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:33.360 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:33.360 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:33.360 CC lib/ftl/base/ftl_base_dev.o 00:04:33.360 CC lib/ftl/base/ftl_base_bdev.o 00:04:33.360 CC lib/ftl/ftl_trace.o 00:04:33.360 SYMLINK libspdk_vhost.so 00:04:33.618 LIB libspdk_nvmf.a 00:04:33.618 LIB libspdk_iscsi.a 00:04:33.618 SO libspdk_nvmf.so.18.1 00:04:33.876 LIB libspdk_ftl.a 00:04:33.876 SO libspdk_iscsi.so.8.0 00:04:33.876 SYMLINK libspdk_iscsi.so 00:04:33.876 SYMLINK libspdk_nvmf.so 00:04:33.876 SO libspdk_ftl.so.9.0 00:04:34.441 SYMLINK libspdk_ftl.so 00:04:34.699 CC module/env_dpdk/env_dpdk_rpc.o 00:04:34.956 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:34.956 CC module/accel/ioat/accel_ioat.o 00:04:34.956 CC module/accel/dsa/accel_dsa.o 00:04:34.956 CC module/blob/bdev/blob_bdev.o 00:04:34.956 CC module/accel/error/accel_error.o 00:04:34.956 CC module/sock/posix/posix.o 00:04:34.956 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:34.956 CC module/scheduler/gscheduler/gscheduler.o 00:04:34.956 CC module/keyring/file/keyring.o 00:04:34.956 LIB libspdk_env_dpdk_rpc.a 00:04:34.956 SO libspdk_env_dpdk_rpc.so.6.0 00:04:34.956 SYMLINK libspdk_env_dpdk_rpc.so 00:04:34.956 CC module/accel/dsa/accel_dsa_rpc.o 00:04:34.956 CC module/keyring/file/keyring_rpc.o 00:04:34.956 CC module/accel/ioat/accel_ioat_rpc.o 00:04:34.956 CC module/accel/error/accel_error_rpc.o 00:04:34.956 LIB libspdk_scheduler_dynamic.a 00:04:34.956 LIB libspdk_scheduler_gscheduler.a 00:04:34.956 LIB libspdk_scheduler_dpdk_governor.a 00:04:34.956 SO libspdk_scheduler_gscheduler.so.4.0 00:04:34.956 SO 
libspdk_scheduler_dynamic.so.4.0 00:04:35.215 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:35.215 LIB libspdk_blob_bdev.a 00:04:35.215 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:35.215 SYMLINK libspdk_scheduler_gscheduler.so 00:04:35.215 SYMLINK libspdk_scheduler_dynamic.so 00:04:35.215 LIB libspdk_accel_dsa.a 00:04:35.215 SO libspdk_blob_bdev.so.11.0 00:04:35.215 LIB libspdk_keyring_file.a 00:04:35.215 LIB libspdk_accel_ioat.a 00:04:35.215 LIB libspdk_accel_error.a 00:04:35.215 SO libspdk_accel_dsa.so.5.0 00:04:35.215 SO libspdk_accel_ioat.so.6.0 00:04:35.215 SO libspdk_keyring_file.so.1.0 00:04:35.215 SO libspdk_accel_error.so.2.0 00:04:35.215 SYMLINK libspdk_blob_bdev.so 00:04:35.215 SYMLINK libspdk_accel_dsa.so 00:04:35.215 SYMLINK libspdk_keyring_file.so 00:04:35.215 SYMLINK libspdk_accel_ioat.so 00:04:35.215 SYMLINK libspdk_accel_error.so 00:04:35.215 CC module/accel/iaa/accel_iaa.o 00:04:35.215 CC module/accel/iaa/accel_iaa_rpc.o 00:04:35.215 CC module/keyring/linux/keyring.o 00:04:35.215 CC module/keyring/linux/keyring_rpc.o 00:04:35.473 LIB libspdk_keyring_linux.a 00:04:35.473 CC module/bdev/delay/vbdev_delay.o 00:04:35.473 CC module/bdev/gpt/gpt.o 00:04:35.473 CC module/bdev/error/vbdev_error.o 00:04:35.473 CC module/bdev/lvol/vbdev_lvol.o 00:04:35.473 CC module/blobfs/bdev/blobfs_bdev.o 00:04:35.473 SO libspdk_keyring_linux.so.1.0 00:04:35.473 LIB libspdk_accel_iaa.a 00:04:35.473 SO libspdk_accel_iaa.so.3.0 00:04:35.731 SYMLINK libspdk_keyring_linux.so 00:04:35.731 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:35.731 CC module/bdev/malloc/bdev_malloc.o 00:04:35.731 CC module/bdev/null/bdev_null.o 00:04:35.731 SYMLINK libspdk_accel_iaa.so 00:04:35.731 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:35.731 CC module/bdev/gpt/vbdev_gpt.o 00:04:35.731 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:35.731 LIB libspdk_sock_posix.a 00:04:35.731 SO libspdk_sock_posix.so.6.0 00:04:35.731 CC module/bdev/error/vbdev_error_rpc.o 00:04:35.989 SYMLINK libspdk_sock_posix.so 00:04:35.989 CC module/bdev/null/bdev_null_rpc.o 00:04:35.989 LIB libspdk_blobfs_bdev.a 00:04:35.989 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:35.989 SO libspdk_blobfs_bdev.so.6.0 00:04:35.989 CC module/bdev/nvme/bdev_nvme.o 00:04:35.989 LIB libspdk_bdev_error.a 00:04:35.989 LIB libspdk_bdev_gpt.a 00:04:35.989 SYMLINK libspdk_blobfs_bdev.so 00:04:35.989 LIB libspdk_bdev_malloc.a 00:04:35.989 SO libspdk_bdev_error.so.6.0 00:04:35.989 SO libspdk_bdev_gpt.so.6.0 00:04:36.262 SO libspdk_bdev_malloc.so.6.0 00:04:36.262 LIB libspdk_bdev_null.a 00:04:36.262 LIB libspdk_bdev_lvol.a 00:04:36.262 CC module/bdev/passthru/vbdev_passthru.o 00:04:36.262 LIB libspdk_bdev_delay.a 00:04:36.262 SO libspdk_bdev_null.so.6.0 00:04:36.262 SYMLINK libspdk_bdev_error.so 00:04:36.262 SYMLINK libspdk_bdev_gpt.so 00:04:36.262 SO libspdk_bdev_lvol.so.6.0 00:04:36.262 SO libspdk_bdev_delay.so.6.0 00:04:36.262 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:36.262 CC module/bdev/nvme/nvme_rpc.o 00:04:36.262 SYMLINK libspdk_bdev_malloc.so 00:04:36.262 CC module/bdev/nvme/bdev_mdns_client.o 00:04:36.262 SYMLINK libspdk_bdev_null.so 00:04:36.262 CC module/bdev/raid/bdev_raid.o 00:04:36.262 CC module/bdev/split/vbdev_split.o 00:04:36.262 SYMLINK libspdk_bdev_lvol.so 00:04:36.262 SYMLINK libspdk_bdev_delay.so 00:04:36.262 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:36.532 CC module/bdev/nvme/vbdev_opal.o 00:04:36.532 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:36.532 CC module/bdev/xnvme/bdev_xnvme.o 00:04:36.532 CC 
module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:36.532 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:36.532 LIB libspdk_bdev_passthru.a 00:04:36.532 CC module/bdev/split/vbdev_split_rpc.o 00:04:36.532 SO libspdk_bdev_passthru.so.6.0 00:04:36.532 SYMLINK libspdk_bdev_passthru.so 00:04:36.532 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:36.790 CC module/bdev/raid/bdev_raid_rpc.o 00:04:36.790 LIB libspdk_bdev_split.a 00:04:36.790 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:36.790 LIB libspdk_bdev_xnvme.a 00:04:36.790 SO libspdk_bdev_split.so.6.0 00:04:36.790 SO libspdk_bdev_xnvme.so.3.0 00:04:36.790 CC module/bdev/aio/bdev_aio.o 00:04:36.790 SYMLINK libspdk_bdev_split.so 00:04:36.790 CC module/bdev/aio/bdev_aio_rpc.o 00:04:36.790 CC module/bdev/raid/bdev_raid_sb.o 00:04:36.790 SYMLINK libspdk_bdev_xnvme.so 00:04:36.790 CC module/bdev/raid/raid0.o 00:04:36.790 LIB libspdk_bdev_zone_block.a 00:04:37.048 SO libspdk_bdev_zone_block.so.6.0 00:04:37.048 CC module/bdev/raid/raid1.o 00:04:37.048 SYMLINK libspdk_bdev_zone_block.so 00:04:37.048 CC module/bdev/raid/concat.o 00:04:37.048 CC module/bdev/ftl/bdev_ftl.o 00:04:37.048 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:37.306 CC module/bdev/iscsi/bdev_iscsi.o 00:04:37.306 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:37.306 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:37.306 LIB libspdk_bdev_aio.a 00:04:37.306 SO libspdk_bdev_aio.so.6.0 00:04:37.306 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:37.306 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:37.306 SYMLINK libspdk_bdev_aio.so 00:04:37.306 LIB libspdk_bdev_ftl.a 00:04:37.564 SO libspdk_bdev_ftl.so.6.0 00:04:37.564 SYMLINK libspdk_bdev_ftl.so 00:04:37.564 LIB libspdk_bdev_raid.a 00:04:37.564 SO libspdk_bdev_raid.so.6.0 00:04:37.564 LIB libspdk_bdev_iscsi.a 00:04:37.564 SO libspdk_bdev_iscsi.so.6.0 00:04:37.564 SYMLINK libspdk_bdev_raid.so 00:04:37.822 SYMLINK libspdk_bdev_iscsi.so 00:04:37.822 LIB libspdk_bdev_virtio.a 00:04:37.822 SO libspdk_bdev_virtio.so.6.0 00:04:38.081 SYMLINK libspdk_bdev_virtio.so 00:04:39.017 LIB libspdk_bdev_nvme.a 00:04:39.017 SO libspdk_bdev_nvme.so.7.0 00:04:39.275 SYMLINK libspdk_bdev_nvme.so 00:04:39.878 CC module/event/subsystems/scheduler/scheduler.o 00:04:39.878 CC module/event/subsystems/sock/sock.o 00:04:39.878 CC module/event/subsystems/keyring/keyring.o 00:04:39.878 CC module/event/subsystems/iobuf/iobuf.o 00:04:39.878 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:39.878 CC module/event/subsystems/vmd/vmd.o 00:04:39.878 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:39.878 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:39.878 LIB libspdk_event_sock.a 00:04:39.878 LIB libspdk_event_keyring.a 00:04:39.878 LIB libspdk_event_vhost_blk.a 00:04:39.878 LIB libspdk_event_scheduler.a 00:04:39.878 LIB libspdk_event_vmd.a 00:04:39.878 SO libspdk_event_scheduler.so.4.0 00:04:39.878 SO libspdk_event_sock.so.5.0 00:04:39.878 SO libspdk_event_keyring.so.1.0 00:04:39.878 SO libspdk_event_vhost_blk.so.3.0 00:04:39.878 LIB libspdk_event_iobuf.a 00:04:39.878 SO libspdk_event_vmd.so.6.0 00:04:39.878 SO libspdk_event_iobuf.so.3.0 00:04:39.878 SYMLINK libspdk_event_sock.so 00:04:39.878 SYMLINK libspdk_event_scheduler.so 00:04:39.878 SYMLINK libspdk_event_keyring.so 00:04:39.878 SYMLINK libspdk_event_vmd.so 00:04:39.878 SYMLINK libspdk_event_vhost_blk.so 00:04:40.137 SYMLINK libspdk_event_iobuf.so 00:04:40.396 CC module/event/subsystems/accel/accel.o 00:04:40.396 LIB libspdk_event_accel.a 00:04:40.655 SO libspdk_event_accel.so.6.0 00:04:40.655 SYMLINK 
libspdk_event_accel.so 00:04:40.914 CC module/event/subsystems/bdev/bdev.o 00:04:41.171 LIB libspdk_event_bdev.a 00:04:41.171 SO libspdk_event_bdev.so.6.0 00:04:41.171 SYMLINK libspdk_event_bdev.so 00:04:41.429 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:41.429 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:41.429 CC module/event/subsystems/ublk/ublk.o 00:04:41.429 CC module/event/subsystems/scsi/scsi.o 00:04:41.429 CC module/event/subsystems/nbd/nbd.o 00:04:41.686 LIB libspdk_event_ublk.a 00:04:41.686 LIB libspdk_event_scsi.a 00:04:41.686 SO libspdk_event_ublk.so.3.0 00:04:41.686 LIB libspdk_event_nbd.a 00:04:41.686 SO libspdk_event_scsi.so.6.0 00:04:41.686 SO libspdk_event_nbd.so.6.0 00:04:41.686 SYMLINK libspdk_event_ublk.so 00:04:41.686 SYMLINK libspdk_event_scsi.so 00:04:41.686 LIB libspdk_event_nvmf.a 00:04:41.686 SO libspdk_event_nvmf.so.6.0 00:04:41.686 SYMLINK libspdk_event_nbd.so 00:04:41.943 SYMLINK libspdk_event_nvmf.so 00:04:41.943 CC module/event/subsystems/iscsi/iscsi.o 00:04:41.943 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:42.200 LIB libspdk_event_vhost_scsi.a 00:04:42.200 LIB libspdk_event_iscsi.a 00:04:42.200 SO libspdk_event_vhost_scsi.so.3.0 00:04:42.200 SO libspdk_event_iscsi.so.6.0 00:04:42.200 SYMLINK libspdk_event_vhost_scsi.so 00:04:42.200 SYMLINK libspdk_event_iscsi.so 00:04:42.457 SO libspdk.so.6.0 00:04:42.457 SYMLINK libspdk.so 00:04:42.714 CXX app/trace/trace.o 00:04:42.714 TEST_HEADER include/spdk/accel.h 00:04:42.714 CC test/rpc_client/rpc_client_test.o 00:04:42.714 TEST_HEADER include/spdk/accel_module.h 00:04:42.714 TEST_HEADER include/spdk/assert.h 00:04:42.714 TEST_HEADER include/spdk/barrier.h 00:04:42.714 TEST_HEADER include/spdk/base64.h 00:04:42.714 TEST_HEADER include/spdk/bdev.h 00:04:42.714 TEST_HEADER include/spdk/bdev_module.h 00:04:42.714 TEST_HEADER include/spdk/bdev_zone.h 00:04:42.714 TEST_HEADER include/spdk/bit_array.h 00:04:42.714 TEST_HEADER include/spdk/bit_pool.h 00:04:42.714 TEST_HEADER include/spdk/blob_bdev.h 00:04:42.714 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:42.714 TEST_HEADER include/spdk/blobfs.h 00:04:42.714 TEST_HEADER include/spdk/blob.h 00:04:42.714 TEST_HEADER include/spdk/conf.h 00:04:42.714 TEST_HEADER include/spdk/config.h 00:04:42.714 TEST_HEADER include/spdk/cpuset.h 00:04:42.714 TEST_HEADER include/spdk/crc16.h 00:04:42.714 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:42.714 TEST_HEADER include/spdk/crc32.h 00:04:42.714 TEST_HEADER include/spdk/crc64.h 00:04:42.714 TEST_HEADER include/spdk/dif.h 00:04:42.714 TEST_HEADER include/spdk/dma.h 00:04:42.714 TEST_HEADER include/spdk/endian.h 00:04:42.714 TEST_HEADER include/spdk/env_dpdk.h 00:04:42.714 TEST_HEADER include/spdk/env.h 00:04:42.714 TEST_HEADER include/spdk/event.h 00:04:42.714 TEST_HEADER include/spdk/fd_group.h 00:04:42.714 TEST_HEADER include/spdk/fd.h 00:04:42.714 TEST_HEADER include/spdk/file.h 00:04:42.714 TEST_HEADER include/spdk/ftl.h 00:04:42.714 TEST_HEADER include/spdk/gpt_spec.h 00:04:42.714 CC examples/ioat/perf/perf.o 00:04:42.714 CC examples/util/zipf/zipf.o 00:04:42.714 CC test/thread/poller_perf/poller_perf.o 00:04:42.714 TEST_HEADER include/spdk/hexlify.h 00:04:42.714 TEST_HEADER include/spdk/histogram_data.h 00:04:42.714 TEST_HEADER include/spdk/idxd.h 00:04:42.714 TEST_HEADER include/spdk/idxd_spec.h 00:04:42.714 TEST_HEADER include/spdk/init.h 00:04:42.714 TEST_HEADER include/spdk/ioat.h 00:04:42.714 TEST_HEADER include/spdk/ioat_spec.h 00:04:42.714 TEST_HEADER include/spdk/iscsi_spec.h 00:04:42.715 TEST_HEADER 
include/spdk/json.h 00:04:42.715 TEST_HEADER include/spdk/jsonrpc.h 00:04:42.715 TEST_HEADER include/spdk/keyring.h 00:04:42.715 TEST_HEADER include/spdk/keyring_module.h 00:04:42.715 TEST_HEADER include/spdk/likely.h 00:04:42.715 TEST_HEADER include/spdk/log.h 00:04:42.715 CC test/dma/test_dma/test_dma.o 00:04:42.715 TEST_HEADER include/spdk/lvol.h 00:04:42.715 TEST_HEADER include/spdk/memory.h 00:04:42.715 TEST_HEADER include/spdk/mmio.h 00:04:42.715 TEST_HEADER include/spdk/nbd.h 00:04:42.715 TEST_HEADER include/spdk/notify.h 00:04:42.715 TEST_HEADER include/spdk/nvme.h 00:04:42.715 TEST_HEADER include/spdk/nvme_intel.h 00:04:42.715 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:42.715 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:42.715 TEST_HEADER include/spdk/nvme_spec.h 00:04:42.715 TEST_HEADER include/spdk/nvme_zns.h 00:04:42.715 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:42.715 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:42.715 TEST_HEADER include/spdk/nvmf.h 00:04:42.715 TEST_HEADER include/spdk/nvmf_spec.h 00:04:42.715 CC test/app/bdev_svc/bdev_svc.o 00:04:42.715 TEST_HEADER include/spdk/nvmf_transport.h 00:04:42.715 TEST_HEADER include/spdk/opal.h 00:04:42.715 TEST_HEADER include/spdk/opal_spec.h 00:04:42.715 TEST_HEADER include/spdk/pci_ids.h 00:04:42.715 TEST_HEADER include/spdk/pipe.h 00:04:42.715 TEST_HEADER include/spdk/queue.h 00:04:42.715 TEST_HEADER include/spdk/reduce.h 00:04:42.715 TEST_HEADER include/spdk/rpc.h 00:04:42.715 TEST_HEADER include/spdk/scheduler.h 00:04:42.971 TEST_HEADER include/spdk/scsi.h 00:04:42.971 TEST_HEADER include/spdk/scsi_spec.h 00:04:42.971 TEST_HEADER include/spdk/sock.h 00:04:42.971 TEST_HEADER include/spdk/stdinc.h 00:04:42.971 CC test/env/mem_callbacks/mem_callbacks.o 00:04:42.971 TEST_HEADER include/spdk/string.h 00:04:42.971 TEST_HEADER include/spdk/thread.h 00:04:42.971 TEST_HEADER include/spdk/trace.h 00:04:42.971 TEST_HEADER include/spdk/trace_parser.h 00:04:42.971 TEST_HEADER include/spdk/tree.h 00:04:42.971 TEST_HEADER include/spdk/ublk.h 00:04:42.971 TEST_HEADER include/spdk/util.h 00:04:42.971 TEST_HEADER include/spdk/uuid.h 00:04:42.971 LINK rpc_client_test 00:04:42.972 TEST_HEADER include/spdk/version.h 00:04:42.972 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:42.972 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:42.972 TEST_HEADER include/spdk/vhost.h 00:04:42.972 TEST_HEADER include/spdk/vmd.h 00:04:42.972 LINK poller_perf 00:04:42.972 TEST_HEADER include/spdk/xor.h 00:04:42.972 TEST_HEADER include/spdk/zipf.h 00:04:42.972 CXX test/cpp_headers/accel.o 00:04:42.972 LINK zipf 00:04:42.972 LINK interrupt_tgt 00:04:42.972 LINK ioat_perf 00:04:42.972 LINK bdev_svc 00:04:43.228 LINK spdk_trace 00:04:43.228 CXX test/cpp_headers/accel_module.o 00:04:43.228 CC app/trace_record/trace_record.o 00:04:43.228 LINK test_dma 00:04:43.228 CC examples/ioat/verify/verify.o 00:04:43.228 CC test/event/reactor/reactor.o 00:04:43.228 CC test/event/event_perf/event_perf.o 00:04:43.485 CXX test/cpp_headers/assert.o 00:04:43.485 CC examples/thread/thread/thread_ex.o 00:04:43.485 CC test/event/reactor_perf/reactor_perf.o 00:04:43.485 CXX test/cpp_headers/barrier.o 00:04:43.485 LINK reactor 00:04:43.485 LINK event_perf 00:04:43.485 LINK spdk_trace_record 00:04:43.485 LINK mem_callbacks 00:04:43.485 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:43.485 LINK verify 00:04:43.743 LINK reactor_perf 00:04:43.743 CXX test/cpp_headers/base64.o 00:04:43.743 LINK thread 00:04:43.743 CC test/accel/dif/dif.o 00:04:43.743 CC test/event/app_repeat/app_repeat.o 
00:04:43.743 CC test/env/vtophys/vtophys.o 00:04:43.743 CC app/nvmf_tgt/nvmf_main.o 00:04:43.743 CXX test/cpp_headers/bdev.o 00:04:44.001 CC test/app/histogram_perf/histogram_perf.o 00:04:44.001 CC test/event/scheduler/scheduler.o 00:04:44.001 CC test/blobfs/mkfs/mkfs.o 00:04:44.001 LINK vtophys 00:04:44.001 LINK app_repeat 00:04:44.001 LINK nvmf_tgt 00:04:44.001 LINK histogram_perf 00:04:44.001 CXX test/cpp_headers/bdev_module.o 00:04:44.001 LINK nvme_fuzz 00:04:44.001 CC examples/sock/hello_world/hello_sock.o 00:04:44.258 LINK mkfs 00:04:44.258 LINK scheduler 00:04:44.258 CXX test/cpp_headers/bdev_zone.o 00:04:44.258 CXX test/cpp_headers/bit_array.o 00:04:44.258 CXX test/cpp_headers/bit_pool.o 00:04:44.258 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:44.258 CXX test/cpp_headers/blob_bdev.o 00:04:44.258 LINK dif 00:04:44.258 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:44.516 CC app/iscsi_tgt/iscsi_tgt.o 00:04:44.516 CXX test/cpp_headers/blobfs_bdev.o 00:04:44.516 CXX test/cpp_headers/blobfs.o 00:04:44.516 CXX test/cpp_headers/blob.o 00:04:44.516 LINK env_dpdk_post_init 00:04:44.516 LINK hello_sock 00:04:44.516 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:44.516 CXX test/cpp_headers/conf.o 00:04:44.516 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:44.516 LINK iscsi_tgt 00:04:44.774 CXX test/cpp_headers/config.o 00:04:44.774 CC test/app/jsoncat/jsoncat.o 00:04:44.774 CXX test/cpp_headers/cpuset.o 00:04:44.774 CC test/env/memory/memory_ut.o 00:04:44.774 CC app/spdk_tgt/spdk_tgt.o 00:04:44.774 CC test/app/stub/stub.o 00:04:44.774 CC examples/vmd/lsvmd/lsvmd.o 00:04:44.774 CC examples/vmd/led/led.o 00:04:44.774 LINK jsoncat 00:04:45.032 CXX test/cpp_headers/crc16.o 00:04:45.032 CC app/spdk_lspci/spdk_lspci.o 00:04:45.032 LINK lsvmd 00:04:45.032 LINK led 00:04:45.032 LINK stub 00:04:45.032 LINK spdk_tgt 00:04:45.032 CXX test/cpp_headers/crc32.o 00:04:45.032 LINK spdk_lspci 00:04:45.032 LINK vhost_fuzz 00:04:45.032 CXX test/cpp_headers/crc64.o 00:04:45.289 CXX test/cpp_headers/dif.o 00:04:45.289 CXX test/cpp_headers/dma.o 00:04:45.289 CXX test/cpp_headers/endian.o 00:04:45.289 CXX test/cpp_headers/env_dpdk.o 00:04:45.289 CC examples/idxd/perf/perf.o 00:04:45.289 CC app/spdk_nvme_perf/perf.o 00:04:45.289 CXX test/cpp_headers/env.o 00:04:45.289 CXX test/cpp_headers/event.o 00:04:45.289 CC test/lvol/esnap/esnap.o 00:04:45.547 CXX test/cpp_headers/fd_group.o 00:04:45.547 CC test/nvme/aer/aer.o 00:04:45.547 CC test/bdev/bdevio/bdevio.o 00:04:45.547 CXX test/cpp_headers/fd.o 00:04:45.547 CC app/spdk_nvme_identify/identify.o 00:04:45.547 CC app/spdk_nvme_discover/discovery_aer.o 00:04:45.806 LINK idxd_perf 00:04:45.806 CXX test/cpp_headers/file.o 00:04:45.806 LINK aer 00:04:45.806 LINK spdk_nvme_discover 00:04:46.063 CXX test/cpp_headers/ftl.o 00:04:46.063 LINK memory_ut 00:04:46.063 CC examples/accel/perf/accel_perf.o 00:04:46.063 LINK bdevio 00:04:46.063 CC test/nvme/reset/reset.o 00:04:46.063 CC test/nvme/sgl/sgl.o 00:04:46.063 CXX test/cpp_headers/gpt_spec.o 00:04:46.321 CC test/env/pci/pci_ut.o 00:04:46.321 LINK spdk_nvme_perf 00:04:46.321 CXX test/cpp_headers/hexlify.o 00:04:46.321 LINK reset 00:04:46.579 CC test/nvme/e2edp/nvme_dp.o 00:04:46.579 CXX test/cpp_headers/histogram_data.o 00:04:46.579 LINK iscsi_fuzz 00:04:46.579 LINK sgl 00:04:46.579 LINK accel_perf 00:04:46.838 CC app/spdk_top/spdk_top.o 00:04:46.838 LINK spdk_nvme_identify 00:04:46.838 CXX test/cpp_headers/idxd.o 00:04:46.838 CXX test/cpp_headers/idxd_spec.o 00:04:46.838 LINK nvme_dp 00:04:46.838 CXX 
test/cpp_headers/init.o 00:04:46.838 LINK pci_ut 00:04:46.838 CC examples/blob/hello_world/hello_blob.o 00:04:47.096 CXX test/cpp_headers/ioat.o 00:04:47.096 CXX test/cpp_headers/ioat_spec.o 00:04:47.096 CXX test/cpp_headers/iscsi_spec.o 00:04:47.096 CC examples/blob/cli/blobcli.o 00:04:47.096 CC test/nvme/overhead/overhead.o 00:04:47.096 CXX test/cpp_headers/json.o 00:04:47.096 CXX test/cpp_headers/jsonrpc.o 00:04:47.096 LINK hello_blob 00:04:47.353 CC app/vhost/vhost.o 00:04:47.353 CC test/nvme/err_injection/err_injection.o 00:04:47.353 CC test/nvme/startup/startup.o 00:04:47.353 CXX test/cpp_headers/keyring.o 00:04:47.353 LINK vhost 00:04:47.353 LINK overhead 00:04:47.611 CC test/nvme/reserve/reserve.o 00:04:47.611 LINK err_injection 00:04:47.611 LINK startup 00:04:47.611 CXX test/cpp_headers/keyring_module.o 00:04:47.611 CC examples/nvme/hello_world/hello_world.o 00:04:47.869 CC examples/nvme/reconnect/reconnect.o 00:04:47.869 LINK reserve 00:04:47.869 CXX test/cpp_headers/likely.o 00:04:47.869 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:47.869 CC test/nvme/simple_copy/simple_copy.o 00:04:47.869 LINK hello_world 00:04:47.869 LINK spdk_top 00:04:47.869 LINK blobcli 00:04:47.869 CXX test/cpp_headers/log.o 00:04:48.126 CC examples/bdev/hello_world/hello_bdev.o 00:04:48.126 CC test/nvme/connect_stress/connect_stress.o 00:04:48.126 LINK simple_copy 00:04:48.126 CXX test/cpp_headers/lvol.o 00:04:48.126 LINK reconnect 00:04:48.126 CC test/nvme/boot_partition/boot_partition.o 00:04:48.126 CC app/spdk_dd/spdk_dd.o 00:04:48.384 CC test/nvme/compliance/nvme_compliance.o 00:04:48.384 LINK connect_stress 00:04:48.384 LINK hello_bdev 00:04:48.384 CXX test/cpp_headers/memory.o 00:04:48.384 LINK boot_partition 00:04:48.384 CC examples/bdev/bdevperf/bdevperf.o 00:04:48.384 LINK nvme_manage 00:04:48.384 CC examples/nvme/arbitration/arbitration.o 00:04:48.642 CXX test/cpp_headers/mmio.o 00:04:48.642 CC test/nvme/fused_ordering/fused_ordering.o 00:04:48.642 LINK spdk_dd 00:04:48.642 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:48.642 LINK nvme_compliance 00:04:48.642 CC examples/nvme/hotplug/hotplug.o 00:04:48.642 CXX test/cpp_headers/nbd.o 00:04:48.899 CC app/fio/nvme/fio_plugin.o 00:04:48.899 CXX test/cpp_headers/notify.o 00:04:48.899 LINK fused_ordering 00:04:48.899 CXX test/cpp_headers/nvme.o 00:04:48.899 CXX test/cpp_headers/nvme_intel.o 00:04:48.899 LINK arbitration 00:04:48.899 LINK doorbell_aers 00:04:49.156 LINK hotplug 00:04:49.156 CXX test/cpp_headers/nvme_ocssd.o 00:04:49.156 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:49.156 CC examples/nvme/abort/abort.o 00:04:49.156 CC test/nvme/fdp/fdp.o 00:04:49.156 CC test/nvme/cuse/cuse.o 00:04:49.156 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:49.413 LINK cmb_copy 00:04:49.413 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:49.413 CC app/fio/bdev/fio_plugin.o 00:04:49.413 LINK pmr_persistence 00:04:49.413 LINK bdevperf 00:04:49.413 CXX test/cpp_headers/nvme_spec.o 00:04:49.414 CXX test/cpp_headers/nvme_zns.o 00:04:49.414 LINK spdk_nvme 00:04:49.693 CXX test/cpp_headers/nvmf_cmd.o 00:04:49.693 LINK abort 00:04:49.693 LINK fdp 00:04:49.693 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:49.693 CXX test/cpp_headers/nvmf.o 00:04:49.693 CXX test/cpp_headers/nvmf_spec.o 00:04:49.693 CXX test/cpp_headers/nvmf_transport.o 00:04:49.693 CXX test/cpp_headers/opal.o 00:04:49.693 CXX test/cpp_headers/opal_spec.o 00:04:49.951 CXX test/cpp_headers/pci_ids.o 00:04:49.951 CXX test/cpp_headers/pipe.o 00:04:49.951 CXX test/cpp_headers/queue.o 00:04:49.951 
LINK spdk_bdev 00:04:49.951 CXX test/cpp_headers/reduce.o 00:04:49.951 CXX test/cpp_headers/rpc.o 00:04:49.951 CXX test/cpp_headers/scheduler.o 00:04:49.951 CXX test/cpp_headers/scsi.o 00:04:49.951 CC examples/nvmf/nvmf/nvmf.o 00:04:49.951 CXX test/cpp_headers/scsi_spec.o 00:04:49.951 CXX test/cpp_headers/sock.o 00:04:50.209 CXX test/cpp_headers/stdinc.o 00:04:50.209 CXX test/cpp_headers/string.o 00:04:50.209 CXX test/cpp_headers/thread.o 00:04:50.209 CXX test/cpp_headers/trace.o 00:04:50.209 CXX test/cpp_headers/trace_parser.o 00:04:50.209 CXX test/cpp_headers/tree.o 00:04:50.209 CXX test/cpp_headers/ublk.o 00:04:50.209 CXX test/cpp_headers/util.o 00:04:50.209 CXX test/cpp_headers/uuid.o 00:04:50.485 CXX test/cpp_headers/version.o 00:04:50.485 LINK nvmf 00:04:50.485 CXX test/cpp_headers/vfio_user_pci.o 00:04:50.485 CXX test/cpp_headers/vfio_user_spec.o 00:04:50.485 CXX test/cpp_headers/vhost.o 00:04:50.485 CXX test/cpp_headers/vmd.o 00:04:50.485 CXX test/cpp_headers/xor.o 00:04:50.485 CXX test/cpp_headers/zipf.o 00:04:50.746 LINK cuse 00:04:52.649 LINK esnap 00:04:52.907 00:04:52.907 real 1m5.514s 00:04:52.907 user 6m8.460s 00:04:52.907 sys 1m15.601s 00:04:52.907 20:21:46 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:52.907 20:21:46 make -- common/autotest_common.sh@10 -- $ set +x 00:04:52.907 ************************************ 00:04:52.907 END TEST make 00:04:52.907 ************************************ 00:04:52.907 20:21:46 -- common/autotest_common.sh@1142 -- $ return 0 00:04:52.907 20:21:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:52.907 20:21:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:52.907 20:21:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:52.907 20:21:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.907 20:21:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:52.907 20:21:46 -- pm/common@44 -- $ pid=5924 00:04:52.907 20:21:46 -- pm/common@50 -- $ kill -TERM 5924 00:04:52.907 20:21:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.907 20:21:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:52.907 20:21:46 -- pm/common@44 -- $ pid=5925 00:04:52.907 20:21:46 -- pm/common@50 -- $ kill -TERM 5925 00:04:52.907 20:21:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:52.907 20:21:46 -- nvmf/common.sh@7 -- # uname -s 00:04:52.907 20:21:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.907 20:21:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.907 20:21:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.907 20:21:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.907 20:21:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.907 20:21:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.907 20:21:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.907 20:21:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.907 20:21:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.907 20:21:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.907 20:21:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fe19b54-c671-479d-b03d-8ff2c5be0c37 00:04:52.907 20:21:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=2fe19b54-c671-479d-b03d-8ff2c5be0c37 00:04:52.907 20:21:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:04:52.907 20:21:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.907 20:21:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.907 20:21:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.907 20:21:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:52.907 20:21:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.907 20:21:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.907 20:21:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.907 20:21:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.907 20:21:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.907 20:21:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.907 20:21:46 -- paths/export.sh@5 -- # export PATH 00:04:52.907 20:21:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.907 20:21:46 -- nvmf/common.sh@47 -- # : 0 00:04:52.907 20:21:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:52.907 20:21:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:52.907 20:21:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.907 20:21:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.907 20:21:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.907 20:21:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:52.907 20:21:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:52.907 20:21:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:52.907 20:21:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:52.907 20:21:46 -- spdk/autotest.sh@32 -- # uname -s 00:04:52.907 20:21:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:52.908 20:21:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:52.908 20:21:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:52.908 20:21:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:52.908 20:21:46 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:52.908 20:21:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:52.908 20:21:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:52.908 20:21:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:52.908 20:21:47 -- spdk/autotest.sh@48 -- # udevadm_pid=67258 00:04:52.908 20:21:47 -- spdk/autotest.sh@47 -- # 
/usr/sbin/udevadm monitor --property 00:04:52.908 20:21:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:52.908 20:21:47 -- pm/common@17 -- # local monitor 00:04:52.908 20:21:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.908 20:21:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.908 20:21:47 -- pm/common@25 -- # sleep 1 00:04:52.908 20:21:47 -- pm/common@21 -- # date +%s 00:04:52.908 20:21:47 -- pm/common@21 -- # date +%s 00:04:52.908 20:21:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720815707 00:04:52.908 20:21:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720815707 00:04:53.166 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720815707_collect-cpu-load.pm.log 00:04:53.166 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720815707_collect-vmstat.pm.log 00:04:54.101 20:21:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:54.101 20:21:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:54.101 20:21:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.101 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:04:54.101 20:21:48 -- spdk/autotest.sh@59 -- # create_test_list 00:04:54.101 20:21:48 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:54.101 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:04:54.101 20:21:48 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:54.101 20:21:48 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:54.101 20:21:48 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:54.101 20:21:48 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:54.101 20:21:48 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:54.101 20:21:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:54.101 20:21:48 -- common/autotest_common.sh@1455 -- # uname 00:04:54.101 20:21:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:54.101 20:21:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:54.101 20:21:48 -- common/autotest_common.sh@1475 -- # uname 00:04:54.101 20:21:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:54.101 20:21:48 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:54.101 20:21:48 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:54.101 20:21:48 -- spdk/autotest.sh@72 -- # hash lcov 00:04:54.101 20:21:48 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:54.101 20:21:48 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:54.101 --rc lcov_branch_coverage=1 00:04:54.101 --rc lcov_function_coverage=1 00:04:54.101 --rc genhtml_branch_coverage=1 00:04:54.101 --rc genhtml_function_coverage=1 00:04:54.101 --rc genhtml_legend=1 00:04:54.101 --rc geninfo_all_blocks=1 00:04:54.101 ' 00:04:54.101 20:21:48 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:54.101 --rc lcov_branch_coverage=1 00:04:54.101 --rc lcov_function_coverage=1 00:04:54.101 --rc genhtml_branch_coverage=1 00:04:54.101 --rc genhtml_function_coverage=1 00:04:54.101 --rc genhtml_legend=1 00:04:54.101 --rc geninfo_all_blocks=1 00:04:54.101 ' 00:04:54.101 20:21:48 -- spdk/autotest.sh@81 -- # export 
'LCOV=lcov 00:04:54.101 --rc lcov_branch_coverage=1 00:04:54.101 --rc lcov_function_coverage=1 00:04:54.101 --rc genhtml_branch_coverage=1 00:04:54.101 --rc genhtml_function_coverage=1 00:04:54.101 --rc genhtml_legend=1 00:04:54.101 --rc geninfo_all_blocks=1 00:04:54.101 --no-external' 00:04:54.101 20:21:48 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:54.101 --rc lcov_branch_coverage=1 00:04:54.101 --rc lcov_function_coverage=1 00:04:54.101 --rc genhtml_branch_coverage=1 00:04:54.101 --rc genhtml_function_coverage=1 00:04:54.101 --rc genhtml_legend=1 00:04:54.101 --rc geninfo_all_blocks=1 00:04:54.101 --no-external' 00:04:54.101 20:21:48 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:54.101 lcov: LCOV version 1.14 00:04:54.101 20:21:48 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:12.182 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:12.182 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:22.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:22.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:22.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:22.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:22.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:22.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:22.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:22.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:22.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:22.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:22.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:22.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:22.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:22.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:22.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:22.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:22.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:22.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:22.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 
00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:22.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:22.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:22.424 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:22.424 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:22.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:22.682 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:22.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:26.871 20:22:20 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:26.871 20:22:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.871 20:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:26.871 20:22:20 -- spdk/autotest.sh@91 -- # rm -f 00:05:26.871 20:22:20 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.437 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:27.437 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:27.437 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:27.437 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:27.437 20:22:21 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:27.437 20:22:21 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:27.437 
20:22:21 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:27.437 20:22:21 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:27.437 20:22:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:27.437 20:22:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:27.437 20:22:21 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:27.437 20:22:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:27.437 20:22:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:27.437 20:22:21 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:27.437 20:22:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:27.437 20:22:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:27.437 20:22:21 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:27.437 20:22:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:27.437 20:22:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:27.437 20:22:21 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:27.437 20:22:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:27.437 20:22:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:27.437 20:22:21 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:27.437 20:22:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:27.437 20:22:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:27.437 20:22:21 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:27.437 20:22:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:27.437 20:22:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:27.437 20:22:21 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:27.437 20:22:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:27.437 20:22:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:27.438 20:22:21 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:27.438 20:22:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:27.438 20:22:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:27.438 20:22:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:27.438 20:22:21 -- scripts/common.sh@378 -- # local 
block=/dev/nvme0n1 pt 00:05:27.438 20:22:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:27.438 No valid GPT data, bailing 00:05:27.438 20:22:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:27.438 20:22:21 -- scripts/common.sh@391 -- # pt= 00:05:27.438 20:22:21 -- scripts/common.sh@392 -- # return 1 00:05:27.438 20:22:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:27.438 1+0 records in 00:05:27.438 1+0 records out 00:05:27.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158062 s, 66.3 MB/s 00:05:27.438 20:22:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:27.438 20:22:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:27.438 20:22:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:27.438 20:22:21 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:27.438 20:22:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:27.438 No valid GPT data, bailing 00:05:27.697 20:22:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:27.697 20:22:21 -- scripts/common.sh@391 -- # pt= 00:05:27.697 20:22:21 -- scripts/common.sh@392 -- # return 1 00:05:27.697 20:22:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:27.697 1+0 records in 00:05:27.697 1+0 records out 00:05:27.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461839 s, 227 MB/s 00:05:27.697 20:22:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:27.697 20:22:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:27.697 20:22:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:27.697 20:22:21 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:27.697 20:22:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:27.697 No valid GPT data, bailing 00:05:27.697 20:22:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:27.697 20:22:21 -- scripts/common.sh@391 -- # pt= 00:05:27.697 20:22:21 -- scripts/common.sh@392 -- # return 1 00:05:27.697 20:22:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:27.697 1+0 records in 00:05:27.697 1+0 records out 00:05:27.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496094 s, 211 MB/s 00:05:27.697 20:22:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:27.697 20:22:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:27.697 20:22:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:05:27.697 20:22:21 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:05:27.697 20:22:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:27.697 No valid GPT data, bailing 00:05:27.697 20:22:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:27.697 20:22:21 -- scripts/common.sh@391 -- # pt= 00:05:27.697 20:22:21 -- scripts/common.sh@392 -- # return 1 00:05:27.697 20:22:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:27.697 1+0 records in 00:05:27.697 1+0 records out 00:05:27.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473163 s, 222 MB/s 00:05:27.697 20:22:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:27.697 20:22:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:27.697 20:22:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:05:27.697 20:22:21 -- scripts/common.sh@378 
-- # local block=/dev/nvme2n3 pt 00:05:27.697 20:22:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:27.697 No valid GPT data, bailing 00:05:27.697 20:22:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:27.697 20:22:21 -- scripts/common.sh@391 -- # pt= 00:05:27.697 20:22:21 -- scripts/common.sh@392 -- # return 1 00:05:27.697 20:22:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:27.956 1+0 records in 00:05:27.956 1+0 records out 00:05:27.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442221 s, 237 MB/s 00:05:27.956 20:22:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:27.956 20:22:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:27.956 20:22:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:27.956 20:22:21 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:27.956 20:22:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:27.956 No valid GPT data, bailing 00:05:27.956 20:22:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:27.956 20:22:21 -- scripts/common.sh@391 -- # pt= 00:05:27.956 20:22:21 -- scripts/common.sh@392 -- # return 1 00:05:27.956 20:22:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:27.956 1+0 records in 00:05:27.956 1+0 records out 00:05:27.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00518298 s, 202 MB/s 00:05:27.956 20:22:21 -- spdk/autotest.sh@118 -- # sync 00:05:27.956 20:22:22 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:27.956 20:22:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:27.956 20:22:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:29.856 20:22:23 -- spdk/autotest.sh@124 -- # uname -s 00:05:29.856 20:22:23 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:29.856 20:22:23 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:29.856 20:22:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.856 20:22:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.856 20:22:23 -- common/autotest_common.sh@10 -- # set +x 00:05:29.856 ************************************ 00:05:29.856 START TEST setup.sh 00:05:29.856 ************************************ 00:05:29.856 20:22:23 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:29.856 * Looking for test storage... 00:05:29.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:29.856 20:22:23 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:29.856 20:22:23 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:29.856 20:22:23 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:29.856 20:22:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.856 20:22:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.856 20:22:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:29.856 ************************************ 00:05:29.856 START TEST acl 00:05:29.856 ************************************ 00:05:29.856 20:22:23 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:29.856 * Looking for test storage... 
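Before the setup.sh and acl banners above, autotest walks every NVMe namespace through the same prep pass: the namespace is first checked for zoning (/sys/block/<name>/queue/zoned must read "none"), then probed for an existing partition table with scripts/spdk-gpt.py and blkid, and only when neither finds one ("No valid GPT data, bailing", empty PTTYPE) is its first 1 MiB zeroed with dd. A condensed, stand-alone sketch of that flow, assuming root privileges; the loop below is illustrative and is not the actual autotest helper:

    shopt -s extglob                              # needed for the !(*p*) pattern used in the trace
    for dev in /dev/nvme*n!(*p*); do              # whole namespaces only, partitions are skipped
        name=${dev##*/}
        zoned=/sys/block/$name/queue/zoned
        if [[ -e $zoned && $(<"$zoned") != none ]]; then
            continue                              # zoned namespace: never wiped
        fi
        # the real run also consults scripts/spdk-gpt.py; blkid alone is used in this sketch
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # clear stale metadata before the tests
        fi
    done

After the wipes, sync flushes the writes and reap_spdk_processes makes sure no stray SPDK process still holds the devices before test-setup.sh is invoked.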
00:05:29.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:29.856 20:22:24 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 
00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:30.127 20:22:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:30.127 20:22:24 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:30.127 20:22:24 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:30.127 20:22:24 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:30.127 20:22:24 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:30.127 20:22:24 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:30.127 20:22:24 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:30.127 20:22:24 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.061 20:22:25 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:31.061 20:22:25 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:31.061 20:22:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.061 20:22:25 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:31.061 20:22:25 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.061 20:22:25 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:31.628 20:22:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:31.628 20:22:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:31.628 20:22:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.193 Hugepages 00:05:32.193 node hugesize free / total 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.193 00:05:32.193 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:32.193 20:22:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:32.194 20:22:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:32.194 20:22:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:32.194 20:22:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.194 20:22:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:32.194 20:22:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:32.194 20:22:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:32.194 20:22:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:32.194 20:22:26 setup.sh.acl -- 
setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:32.194 20:22:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:05:32.452 20:22:26 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:32.452 20:22:26 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.452 20:22:26 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.452 20:22:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:32.452 ************************************ 00:05:32.452 START TEST denied 00:05:32.452 ************************************ 00:05:32.452 20:22:26 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:32.452 20:22:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:32.452 20:22:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:32.452 20:22:26 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:32.452 20:22:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.452 20:22:26 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.828 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:33.828 20:22:27 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:33.828 20:22:27 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:33.828 20:22:27 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:33.828 20:22:27 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:33.828 20:22:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:33.828 20:22:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:33.828 20:22:27 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:33.828 20:22:27 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:33.828 20:22:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:33.828 20:22:27 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:40.449 00:05:40.449 real 0m7.062s 00:05:40.449 user 0m0.797s 00:05:40.449 sys 0m1.294s 00:05:40.449 20:22:33 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.449 
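The denied test that just ran exercises PCI_BLOCKED: with 0000:00:10.0 blocked, setup.sh config has to report "Skipping denied controller at 0000:00:10.0" and the controller must remain bound to the kernel nvme driver rather than being handed to a userspace driver. A hedged sketch of the same check, assuming an SPDK checkout at $SPDK_DIR; the failure message is illustrative:

    bdf=0000:00:10.0
    PCI_BLOCKED=" $bdf" "$SPDK_DIR/scripts/setup.sh" config | grep "Skipping denied controller at $bdf"
    # the blocked controller must still be owned by the kernel nvme driver
    driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    [[ ${driver##*/} == nvme ]] || echo "FAIL: $bdf was rebound to ${driver##*/}"

The allowed test that follows inverts the check: with PCI_ALLOWED=0000:00:10.0, that controller is expected to be rebound (nvme -> uio_pci_generic) while 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0 stay on nvme.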
************************************ 00:05:40.449 END TEST denied 00:05:40.449 20:22:33 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:40.449 ************************************ 00:05:40.449 20:22:33 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:40.449 20:22:33 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:40.449 20:22:33 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.449 20:22:33 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.449 20:22:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:40.449 ************************************ 00:05:40.449 START TEST allowed 00:05:40.449 ************************************ 00:05:40.449 20:22:33 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:40.449 20:22:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:40.449 20:22:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:40.449 20:22:33 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:40.449 20:22:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.449 20:22:33 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:40.707 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:40.707 20:22:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.638 00:05:41.639 real 0m2.214s 00:05:41.639 user 0m1.029s 00:05:41.639 sys 0m1.182s 00:05:41.639 20:22:35 setup.sh.acl.allowed -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.639 20:22:35 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:41.639 ************************************ 00:05:41.639 END TEST allowed 00:05:41.639 ************************************ 00:05:41.897 20:22:35 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:41.897 00:05:41.897 real 0m11.888s 00:05:41.897 user 0m3.070s 00:05:41.897 sys 0m3.861s 00:05:41.897 20:22:35 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.898 20:22:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:41.898 ************************************ 00:05:41.898 END TEST acl 00:05:41.898 ************************************ 00:05:41.898 20:22:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:41.898 20:22:35 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:41.898 20:22:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.898 20:22:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.898 20:22:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:41.898 ************************************ 00:05:41.898 START TEST hugepages 00:05:41.898 ************************************ 00:05:41.898 20:22:35 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:41.898 * Looking for test storage... 00:05:41.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 4406576 kB' 'MemAvailable: 7371832 kB' 'Buffers: 2436 kB' 'Cached: 3167888 kB' 'SwapCached: 0 kB' 'Active: 444300 kB' 'Inactive: 2827768 kB' 'Active(anon): 112260 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 103596 kB' 'Mapped: 48788 kB' 'Shmem: 10516 kB' 'KReclaimable: 84772 kB' 'Slab: 164512 kB' 'SReclaimable: 84772 kB' 'SUnreclaim: 79740 kB' 'KernelStack: 6588 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 326116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
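The long run of "[[ <field> == Hugepagesize ]]" / "continue" lines that begins here (and carries on below) is only the xtrace of get_meminfo scanning every /proc/meminfo field until the requested key matches; for Hugepagesize the scan ends with "echo 2048", i.e. 2048 kB pages. A minimal stand-alone version of that lookup pattern, not the autotest helper itself:

    get_meminfo() {                      # usage: get_meminfo Hugepagesize
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] || continue
            echo "$val"                  # value in kB, or a bare count for HugePages_* fields
            return 0
        done < /proc/meminfo
        return 1
    }

With 2048 kB pages, the later get_test_nr_hugepages 2097152 0 request resolves to 2097152 / 2048 = 1024 pages, which matches the HugePages_Total of 1024 reported once default_setup has run.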
00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.898 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 
-- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.899 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:41.900 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:41.900 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:41.900 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:41.900 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:41.900 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:41.900 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:41.900 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:41.900 
20:22:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:41.900 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:41.900 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:41.900 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:41.900 20:22:35 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:41.900 20:22:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.900 20:22:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.900 20:22:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:41.900 ************************************ 00:05:41.900 START TEST default_setup 00:05:41.900 ************************************ 00:05:41.900 20:22:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:41.900 20:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:41.900 20:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:41.900 20:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:41.900 20:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:41.900 20:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:41.900 20:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:41.900 20:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:41.900 20:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:41.900 20:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.900 20:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.465 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.030 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.030 0000:00:10.0 
(1b36 0010): nvme -> uio_pci_generic 00:05:43.030 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.030 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6526068 kB' 'MemAvailable: 9491076 kB' 'Buffers: 2436 kB' 'Cached: 3167872 kB' 'SwapCached: 0 kB' 'Active: 462248 kB' 'Inactive: 2827784 kB' 'Active(anon): 130208 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121080 kB' 'Mapped: 48904 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163932 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79684 kB' 'KernelStack: 6576 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.290 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup 
-- setup/common.sh@28 -- # mapfile -t mem 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.291 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6526068 kB' 'MemAvailable: 9491076 kB' 'Buffers: 2436 kB' 'Cached: 3167872 kB' 'SwapCached: 0 kB' 'Active: 461908 kB' 'Inactive: 2827784 kB' 'Active(anon): 129868 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120956 kB' 'Mapped: 48904 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163920 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79672 kB' 'KernelStack: 6544 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.292 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.293 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.294 20:22:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6526068 kB' 'MemAvailable: 9491076 kB' 'Buffers: 2436 kB' 'Cached: 3167872 kB' 'SwapCached: 0 kB' 'Active: 461860 kB' 'Inactive: 2827784 kB' 'Active(anon): 129820 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 
268 kB' 'Writeback: 0 kB' 'AnonPages: 120912 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163916 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79668 kB' 'KernelStack: 6512 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
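The scans above repeat the same per-key pass over /proc/meminfo for each value the test needs. Below is a minimal bash sketch of what the traced get_meminfo helper in setup/common.sh appears to implement, assembled only from the commands visible in this trace (the mapfile call, the "Node +([0-9]) " prefix strip, and the IFS=': ' / read -r var val _ / key-compare / echo sequence). The while-loop framing, the extglob line, and the node-number guard are assumptions, not confirmed source:

  shopt -s extglob                          # assumed: needed for the +([0-9]) pattern seen in the trace
  get_meminfo() {
      local get=$1 node=${2:-}              # names get/node/var/val appear in the traced script
      local mem_f=/proc/meminfo var val
      local -a mem
      # Per-node queries appear to switch to the node-specific meminfo file when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # node files prefix each line with "Node N "; strip it
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"                   # e.g. "0" for AnonHugePages, "1024" for HugePages_Total
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

In this run the scan returns 0 for AnonHugePages, HugePages_Surp and HugePages_Rsvd, and 1024 for HugePages_Total, matching the meminfo snapshot printed in the trace.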
00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.294 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.295 20:22:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:43.296 nr_hugepages=1024 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:43.296 resv_hugepages=0 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:43.296 surplus_hugepages=0 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:43.296 anon_hugepages=0 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.296 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6526068 kB' 'MemAvailable: 9491076 kB' 'Buffers: 2436 kB' 'Cached: 3167872 kB' 'SwapCached: 0 kB' 'Active: 461728 kB' 'Inactive: 2827784 kB' 'Active(anon): 129688 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120780 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163916 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79668 kB' 'KernelStack: 6512 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 
0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.297 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.298 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:43.299 20:22:37 
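The scan that just completed above is the test's get_meminfo helper from setup/common.sh: it loads the chosen meminfo file, splits each line on IFS=': ', and skips key after key until it reaches the one requested (HugePages_Total here), then echoes the value (1024) and returns. Below is a minimal standalone sketch of that pattern, reconstructed from the xtrace rather than copied from the SPDK source; names follow what the trace shows.

# Minimal sketch of the lookup pattern traced above (setup/common.sh, get_meminfo):
# walk the chosen meminfo file key by key and print the value of the requested key.
shopt -s extglob

get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, read the NUMA-local counters instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local mem var val _
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] || continue   # skip keys until the match
                echo "$val"
                return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
}

get_meminfo HugePages_Total      # prints 1024 on the VM traced above
get_meminfo HugePages_Free 0     # same lookup, restricted to node 0's meminfo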
setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6526068 kB' 'MemUsed: 5715912 kB' 'SwapCached: 0 kB' 'Active: 461772 kB' 'Inactive: 2827784 kB' 'Active(anon): 129732 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 3170308 kB' 'Mapped: 48712 kB' 'AnonPages: 120844 kB' 'Shmem: 10476 kB' 'KernelStack: 6528 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84248 kB' 'Slab: 163916 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 
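The get_nodes step traced just above (setup/hugepages.sh@27-@33) enumerates /sys/devices/system/node/node* and records one hugepage count per NUMA node; on this single-node VM that yields no_nodes=1 with node 0 holding all 1024 pages. A hedged sketch of that enumeration follows; the hugepages-2048kB sysfs path used for the per-node count is an illustrative stand-in, since the script itself fills the array from the meminfo values it just read.

# Sketch of the node discovery traced above (get_nodes).
shopt -s extglob nullglob

nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
        # Index by node number; value is that node's current 2 MiB hugepage count.
        nodes_sys[${node##*node}]=$(< "$node"/hugepages/hugepages-2048kB/nr_hugepages)
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes detected" >&2; exit 1; }
echo "no_nodes=$no_nodes: ${nodes_sys[*]} hugepages per node"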
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.299 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:43.300 node0=1024 expecting 1024 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:43.300 20:22:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:43.300 00:05:43.300 real 0m1.354s 00:05:43.300 user 0m0.604s 00:05:43.301 sys 0m0.741s 00:05:43.301 20:22:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.301 20:22:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:43.301 ************************************ 00:05:43.301 END TEST default_setup 00:05:43.301 ************************************ 00:05:43.301 20:22:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:43.301 20:22:37 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:43.301 20:22:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.301 20:22:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.301 20:22:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:43.301 ************************************ 00:05:43.301 START TEST per_node_1G_alloc 00:05:43.301 ************************************ 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc 
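per_node_1G_alloc, which starts in the trace above, requests 1048576 kB (1 GiB) on node 0; with the 2048 kB default hugepage size reported in the meminfo dumps, get_test_nr_hugepages turns that into 512 pages and hands them to scripts/setup.sh through the NRHUGE and HUGENODE variables seen at hugepages.sh@146. A small sketch of that sizing arithmetic; the awk lookup of Hugepagesize is an illustrative stand-in for the script's own helper.

# Sizing arithmetic behind get_test_nr_hugepages 1048576 0, as traced above.
size_kb=1048576                                                   # 1 GiB requested by the test
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this VM
(( size_kb >= hugepage_kb )) || { echo "request smaller than one hugepage" >&2; exit 1; }
nr_hugepages=$(( size_kb / hugepage_kb ))                         # -> 512
echo "NRHUGE=$nr_hugepages HUGENODE=0"
# The test then reconfigures hugepages with (paths as in the trace):
#   NRHUGE=512 HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh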
-- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.301 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.867 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:43.867 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:43.867 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:43.867 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.867 20:22:37 
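Before verifying the counts, verify_nr_hugepages (hugepages.sh@96 in the trace above) tests the kernel's transparent-hugepage mode string "always [madvise] never" against *[never]*; only when THP is not globally disabled does it go on to sample AnonHugePages, which on this VM comes back 0. A hedged sketch of that guard, assuming the standard sysfs location for the THP mode string:

# Guard traced at setup/hugepages.sh@96: sample AnonHugePages only when
# transparent hugepages are not globally disabled ("[never]").
thp_mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
anon_hugepages=0
if [[ $thp_mode != *"[never]"* ]]; then
        anon_hugepages=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # value in kB
fi
echo "anon_hugepages=$anon_hugepages"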
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.867 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575444 kB' 'MemAvailable: 10540464 kB' 'Buffers: 2436 kB' 'Cached: 3167876 kB' 'SwapCached: 0 kB' 'Active: 462652 kB' 'Inactive: 2827796 kB' 'Active(anon): 130612 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121708 kB' 'Mapped: 48844 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163912 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79664 kB' 'KernelStack: 6632 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.868 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.869 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575696 kB' 'MemAvailable: 10540716 kB' 'Buffers: 2436 kB' 'Cached: 3167876 kB' 'SwapCached: 0 kB' 'Active: 461808 kB' 'Inactive: 2827796 kB' 'Active(anon): 129768 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 120916 kB' 'Mapped: 48900 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163932 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79684 kB' 'KernelStack: 6536 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
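The run of trace lines above and below is setup/common.sh's get_meminfo walking every key of the captured meminfo snapshot until it reaches the one asked for (HugePages_Surp at this point), issuing "continue" for every non-matching field. The following is a minimal stand-alone sketch of that parsing pattern; get_meminfo_value is a hypothetical name, and the helper only illustrates the mapfile / IFS=': ' / read loop visible in the trace, not the setup/common.sh implementation itself.

shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

# Hypothetical helper sketching the lookup pattern traced here; not the SPDK
# setup/common.sh function.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _

    # Per-node lookups read the NUMA node's own meminfo when it exists,
    # mirroring the /sys/devices/system/node/node$node/meminfo test in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Snapshot the file, strip the "Node <N> " prefix per-node files carry,
    # then split each "Key:   value kB" line on ': ' and echo the value of the
    # requested key -- the same mapfile / IFS=': ' / read -r var val _ sequence
    # shown in the trace.
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"       # e.g. 0 for HugePages_Surp on the system traced here
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_value HugePages_Surp      # system-wide lookup
get_meminfo_value HugePages_Free 0    # same key restricted to NUMA node 0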
00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.870 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 
20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.871 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575696 kB' 'MemAvailable: 10540716 kB' 'Buffers: 2436 kB' 'Cached: 3167876 kB' 'SwapCached: 0 kB' 'Active: 461716 kB' 'Inactive: 2827796 kB' 'Active(anon): 129676 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 120772 kB' 'Mapped: 48900 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163932 kB' 'SReclaimable: 84248 
kB' 'SUnreclaim: 79684 kB' 'KernelStack: 6504 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.872 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:43.873 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.874 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.874 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
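The AnonHugePages and HugePages_Surp lookups above both came back 0 (anon=0 at setup/hugepages.sh@97, surp=0 at @99); the HugePages_Rsvd scan in progress here resolves the same way a little further down, after which hugepages.sh@107 and @109 check those counters against the 512 pages this test requested per node. Below is a minimal sketch of that bookkeeping, reusing the hypothetical get_meminfo_value helper from the earlier sketch; how nr_hugepages itself is derived is not visible in this excerpt, so it is passed in as an argument.

check_hugepage_alloc() {
    # requested: hugepage count the test asked for; nr_hugepages: the count the
    # setup reports back (its derivation is outside this excerpt).
    local requested=$1 nr_hugepages=$2 surp resv

    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)

    # Mirrors the checks traced at setup/hugepages.sh@107 and @109: the request
    # must be fully covered, with no surplus or reserved pages inflating it.
    (( requested == nr_hugepages + surp + resv )) || return 1
    (( requested == nr_hugepages ))
}

check_hugepage_alloc 512 512   # both checks pass here, since surp=0 and resv=0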
00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:44.134 nr_hugepages=512 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:44.134 resv_hugepages=0 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:44.134 surplus_hugepages=0 00:05:44.134 anon_hugepages=0 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.134 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575948 kB' 
'MemAvailable: 10540968 kB' 'Buffers: 2436 kB' 'Cached: 3167876 kB' 'SwapCached: 0 kB' 'Active: 461704 kB' 'Inactive: 2827796 kB' 'Active(anon): 129664 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 120884 kB' 'Mapped: 49016 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163928 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79680 kB' 'KernelStack: 6568 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.135 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 
20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:44.136 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575948 kB' 'MemUsed: 4666032 kB' 'SwapCached: 0 kB' 'Active: 461752 kB' 'Inactive: 2827796 kB' 'Active(anon): 129712 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 
1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 3170312 kB' 'Mapped: 49016 kB' 'AnonPages: 120912 kB' 'Shmem: 10476 kB' 'KernelStack: 6536 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84248 kB' 'Slab: 163928 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
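The get_meminfo call traced here switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo once the per-node file is found, strips the "Node 0 " prefix from every line, and then scans "field: value" pairs until it reaches the field asked for (HugePages_Surp); every non-matching field shows up as one of the "continue" lines surrounding this point. A minimal sketch of that lookup, reconstructed from the xtrace rather than copied from setup/common.sh, so details such as error handling are assumptions:

#!/usr/bin/env bash
# Sketch of the meminfo lookup the trace performs (reconstructed, not the SPDK source).
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo
    # Per-node figures live in sysfs when a node id is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; drop it so both sources parse the same way.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long run of "continue" lines in the log is this step
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp 0    # prints 0 on the node this test inspects

The hundreds of repeated continue/IFS/read lines in the log are exactly this scan stepping past every field that is not the one requested.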
00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 
20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
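That per-node HugePages_Surp lookup feeds the bookkeeping at hugepages.sh@115-@128, which closes this test with the "node0=512 expecting 512" line further below. A rough standalone equivalent of that per-node check, reading the same counters from the per-node hugepages files in sysfs instead of through get_meminfo (the paths, the 2048 kB page size, and the requested map are assumptions for illustration, not the script's own code):

#!/usr/bin/env bash
# Illustrative per-node check: compare the count the test configured for each
# node, plus that node's surplus pages, with what the kernel reports.
shopt -s extglob

declare -A requested=( [0]=512 )   # what per_node_1G_alloc asked node0 to hold

for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}
    hp=$node/hugepages/hugepages-2048kB        # assumes 2048 kB pages, as on this VM
    reported=$(< "$hp/nr_hugepages")
    surp=$(< "$hp/surplus_hugepages")
    expect=$(( requested[$n] + surp ))
    echo "node$n=$reported expecting $expect"
    [[ $reported == "$expect" ]] || echo "node$n mismatch" >&2
done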
00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.137 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.138 
20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:44.138 node0=512 expecting 512 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:44.138 00:05:44.138 real 0m0.673s 00:05:44.138 user 0m0.329s 00:05:44.138 sys 0m0.391s 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.138 20:22:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:44.138 ************************************ 00:05:44.138 END TEST per_node_1G_alloc 00:05:44.138 ************************************ 00:05:44.138 20:22:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:44.138 20:22:38 
setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:44.138 20:22:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.138 20:22:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.138 20:22:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:44.138 ************************************ 00:05:44.138 START TEST even_2G_alloc 00:05:44.138 ************************************ 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.138 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.659 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:44.659 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:44.659 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:44.659 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic 
driver 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6527564 kB' 'MemAvailable: 9492584 kB' 'Buffers: 2436 kB' 'Cached: 3167876 kB' 'SwapCached: 0 kB' 'Active: 461916 kB' 'Inactive: 2827796 kB' 'Active(anon): 129876 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 120984 kB' 'Mapped: 48904 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163900 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79652 kB' 'KernelStack: 6520 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 
20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
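Before this verification pass, the START TEST even_2G_alloc block above sized the pool: get_test_nr_hugepages 2097152 divided the request by the 2048 kB Hugepagesize to get nr_hugepages=1024, and HUGE_EVEN_ALLOC=yes asks setup.sh to spread those pages evenly across the NUMA nodes. A small sketch of that arithmetic under the same assumptions (kB units and 2048 kB pages, consistent with the traced numbers; plan_even_hugepages is an invented name, not the SPDK helper):

#!/usr/bin/env bash
# Sketch of the even-allocation sizing seen in the trace.
shopt -s extglob

plan_even_hugepages() {
    local size_kb=$1
    local hugepagesize_kb
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    local nr_hugepages=$(( size_kb / hugepagesize_kb ))                  # 2097152 / 2048 = 1024
    local -a nodes=(/sys/devices/system/node/node+([0-9]))
    local per_node=$(( nr_hugepages / ${#nodes[@]} ))                    # even split across nodes
    local n
    for n in "${nodes[@]}"; do
        echo "${n##*/}: $per_node pages"
    done
}

plan_even_hugepages 2097152    # on the single-node test VM this prints "node0: 1024 pages"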
00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.659 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6527564 kB' 'MemAvailable: 9492584 kB' 'Buffers: 2436 kB' 'Cached: 3167876 kB' 'SwapCached: 0 kB' 'Active: 461556 kB' 'Inactive: 2827796 kB' 'Active(anon): 129516 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 120908 kB' 'Mapped: 48772 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163908 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79660 kB' 'KernelStack: 6560 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.660 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
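The scan traced above repeats identically for every /proc/meminfo key: setup/common.sh walks the snapshot one "key: value" pair at a time with IFS=': ', hits continue for every field that is not the one requested (HugePages_Surp in this pass), then echoes the matching value and returns. A minimal stand-alone sketch of that lookup, assuming a plain /proc/meminfo layout and no per-node file (an illustration of the logged behaviour, not the SPDK helper itself):

    # minimal sketch of the get_meminfo-style scan seen in the trace above
    # (illustrative only; reads /proc/meminfo directly, no per-node handling)
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching field
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        echo 0   # requested key not present
    }
    # usage mirroring the lookups in this log:
    #   surp=$(get_meminfo_sketch HugePages_Surp)
    #   resv=$(get_meminfo_sketch HugePages_Rsvd)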
00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.661 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6527564 kB' 'MemAvailable: 9492584 kB' 'Buffers: 2436 kB' 'Cached: 3167876 kB' 'SwapCached: 0 kB' 'Active: 461760 kB' 'Inactive: 2827796 kB' 'Active(anon): 129720 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 120892 kB' 'Mapped: 48772 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163900 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79652 kB' 'KernelStack: 6560 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.662 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.663 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:44.664 nr_hugepages=1024 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:44.664 resv_hugepages=0 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:44.664 surplus_hugepages=0 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:44.664 anon_hugepages=0 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6528180 kB' 'MemAvailable: 9493200 kB' 'Buffers: 2436 kB' 'Cached: 3167876 kB' 'SwapCached: 0 kB' 'Active: 461544 kB' 'Inactive: 2827796 kB' 'Active(anon): 129504 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 120644 kB' 'Mapped: 48772 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163900 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79652 kB' 'KernelStack: 6560 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.664 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.664 20:22:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
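Each pass also shows the data-source selection that precedes the scan: the script tests /sys/devices/system/node/node$node/meminfo ($node is empty in this run, which is why the path appears as node/node/meminfo in the log), falls back to /proc/meminfo, slurps the file with mapfile, and strips the "Node <N> " prefix that per-node meminfo lines carry. A hedged sketch of that step, using the same constructs that appear in the trace:

    # sketch of the source selection at the top of each scan
    # ($node is empty here, so the per-node test fails and /proc/meminfo is used)
    shopt -s extglob   # required for the +([0-9]) pattern in the prefix strip
    read_meminfo_sketch() {
        local node=$1 mem_f mem
        mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node meminfo lines begin with "Node <N> "; drop that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }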
00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.665 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6528180 kB' 'MemUsed: 5713800 kB' 'SwapCached: 0 kB' 'Active: 461776 kB' 'Inactive: 2827796 kB' 'Active(anon): 129736 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'FilePages: 3170312 kB' 'Mapped: 48772 kB' 'AnonPages: 120880 kB' 'Shmem: 10476 kB' 'KernelStack: 6560 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84248 kB' 'Slab: 163900 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 
20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.666 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.667 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 
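(Editor's note on the helper being traced above: setup/common.sh's get_meminfo is what produces these long runs of IFS=': ' / read -r / [[ field == ... ]] / continue entries. Reconstructed from the xtrace, it scans /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node argument is given, until the requested field matches, then echoes the value and returns, e.g. 1024 for HugePages_Total and 0 for node0's HugePages_Surp above. The sketch below is a condensed, equivalent reading of the trace, not the verbatim script; argument handling and the loop shape are simplified.)

    # Sketch of get_meminfo as reconstructed from the xtrace (simplified).
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}        # field name, optional NUMA node
        local var val _
        local mem_f=/proc/meminfo mem
        # Use the per-node meminfo when a node was requested and it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix
        # Scan "Field: value kB" lines until the requested field is found.
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }
    # e.g. get_meminfo HugePages_Surp 0   -> prints 0 for node0, as traced above

(hugepages.sh then folds these per-node values into nodes_test[], adding the reserved and surplus counts it just fetched, and reports 'node0=1024 expecting 1024' before asserting the match, which is the passing check seen just below.)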
00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:44.965 node0=1024 expecting 1024 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:44.965 00:05:44.965 real 0m0.678s 00:05:44.965 user 0m0.331s 00:05:44.965 sys 0m0.390s 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.965 20:22:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:44.965 ************************************ 00:05:44.965 END TEST even_2G_alloc 00:05:44.965 ************************************ 00:05:44.965 20:22:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:44.965 20:22:38 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:44.965 20:22:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.965 20:22:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.965 20:22:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:44.965 ************************************ 00:05:44.965 START TEST odd_alloc 00:05:44.965 ************************************ 00:05:44.965 20:22:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:44.965 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:44.965 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:44.965 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:44.965 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:44.965 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:44.965 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:44.965 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:44.965 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:44.965 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@83 -- # : 0 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.966 20:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.223 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.223 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.223 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.223 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6523612 kB' 'MemAvailable: 9488628 kB' 'Buffers: 2436 kB' 'Cached: 3167872 kB' 'SwapCached: 0 kB' 'Active: 462076 kB' 'Inactive: 2827792 kB' 'Active(anon): 130036 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827792 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 121152 kB' 'Mapped: 49060 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163828 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79580 kB' 'KernelStack: 6592 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 
20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.486 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 
20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.487 20:22:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6524024 kB' 'MemAvailable: 9489040 kB' 'Buffers: 2436 kB' 'Cached: 3167872 kB' 'SwapCached: 0 kB' 'Active: 461796 kB' 'Inactive: 2827792 kB' 'Active(anon): 129756 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 120920 kB' 'Mapped: 49024 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163800 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79552 kB' 'KernelStack: 6592 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.487 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.488 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 
20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6524024 kB' 'MemAvailable: 9489040 kB' 'Buffers: 2436 kB' 'Cached: 3167872 kB' 'SwapCached: 0 kB' 'Active: 461752 kB' 'Inactive: 2827792 kB' 'Active(anon): 129712 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 120820 kB' 'Mapped: 49024 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163800 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79552 kB' 'KernelStack: 6576 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.489 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.490 20:22:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 
20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:45.491 nr_hugepages=1025 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:45.491 resv_hugepages=0 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:45.491 surplus_hugepages=0 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:45.491 anon_hugepages=0 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 
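The xtrace above repeats the same meminfo parse once per field while hugepages.sh@99-@110 collects HugePages_Surp, HugePages_Rsvd and HugePages_Total and checks them against the 1025 pages the odd_alloc test requested. A minimal sketch, assuming bash with extglob, of what that traced get_meminfo loop amounts to; the names mirror the trace, but the body is an illustrative reconstruction, not the actual setup/common.sh:

```bash
#!/usr/bin/env bash
# Illustrative reconstruction of the meminfo lookup seen in the trace.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    # Per-node lookups read the node-specific meminfo instead (common.sh@23-24).
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <N> "; strip it so the
    # field names match the plain /proc/meminfo layout (common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one, then echo its value.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Consistency check in the spirit of hugepages.sh@107/@110: the kernel's
# HugePages_Total must equal the requested pages plus surplus plus reserved.
nr_hugepages=1025   # the odd page count this test requested
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv )) && echo "odd_alloc of $nr_hugepages pages consistent"
```

With surp=0 and resv=0, as the trace reports, the check reduces to HugePages_Total == 1025, which is exactly what common.sh@33 returns at hugepages.sh@110 below.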
00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6524024 kB' 'MemAvailable: 9489040 kB' 'Buffers: 2436 kB' 'Cached: 3167872 kB' 'SwapCached: 0 kB' 'Active: 461772 kB' 'Inactive: 2827792 kB' 'Active(anon): 129732 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 120884 kB' 'Mapped: 49024 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163800 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79552 kB' 'KernelStack: 6576 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.491 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.492 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6524024 kB' 'MemUsed: 5717956 kB' 'SwapCached: 0 kB' 'Active: 461600 kB' 'Inactive: 2827792 kB' 'Active(anon): 129560 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'FilePages: 3170308 kB' 'Mapped: 49024 kB' 'AnonPages: 120672 kB' 'Shmem: 10476 kB' 'KernelStack: 6592 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84248 kB' 'Slab: 163800 kB' 
'SReclaimable: 84248 kB' 'SUnreclaim: 79552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.493 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:45.494 node0=1025 expecting 1025 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:45.494 00:05:45.494 real 0m0.669s 00:05:45.494 user 0m0.322s 00:05:45.494 sys 0m0.396s 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.494 20:22:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:45.494 ************************************ 00:05:45.494 END TEST odd_alloc 00:05:45.494 ************************************ 00:05:45.494 20:22:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:45.494 20:22:39 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:45.494 20:22:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.494 20:22:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.494 20:22:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:45.494 ************************************ 00:05:45.494 START TEST custom_alloc 00:05:45.494 ************************************ 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:45.494 
20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:45.494 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:45.495 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:45.495 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:45.495 20:22:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:45.495 20:22:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.495 20:22:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.064 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.064 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.064 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.064 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.064 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575600 kB' 'MemAvailable: 10540616 kB' 'Buffers: 2436 kB' 'Cached: 3167872 kB' 'SwapCached: 0 kB' 'Active: 462272 kB' 'Inactive: 2827792 kB' 'Active(anon): 130232 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121328 kB' 'Mapped: 49008 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163788 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79540 kB' 'KernelStack: 6640 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345620 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:46.064 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.065 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575600 kB' 'MemAvailable: 10540620 kB' 'Buffers: 2436 kB' 'Cached: 3167876 kB' 'SwapCached: 0 kB' 'Active: 461732 kB' 'Inactive: 2827796 kB' 'Active(anon): 129692 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121064 kB' 'Mapped: 48776 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163788 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79540 kB' 'KernelStack: 6544 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.066 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 
20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.067 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 
20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.068 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575600 kB' 'MemAvailable: 10540620 kB' 'Buffers: 2436 kB' 'Cached: 3167876 kB' 'SwapCached: 0 kB' 'Active: 461748 kB' 'Inactive: 2827796 kB' 'Active(anon): 129708 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 
2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120820 kB' 'Mapped: 48776 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163780 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79532 kB' 'KernelStack: 6544 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
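The printf output above is the /proc/meminfo snapshot that get_meminfo captured with mapfile before scanning it, and every following [[ ... ]] / continue pair in the trace is one iteration of that scan. A minimal sketch of the pattern, simplified from what the trace shows (node handling and the "Node N" prefix strip are left out here):

    # Capture the file once, then walk "Key:   value" pairs until the requested
    # key matches; every non-matching key hits "continue", which is what fills the trace.
    get=HugePages_Rsvd
    mapfile -t mem < /proc/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"            # "0" on this run
        break
    done < <(printf '%s\n' "${mem[@]}")

On this run the scan ends at HugePages_Rsvd with echo 0, which hugepages.sh stores as resv=0 further down.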
00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.069 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.070 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
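The backslash-heavy right-hand sides in these comparisons ([[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and so on) are an xtrace rendering, not part of the data: inside [[ ]] an unquoted expansion on the right of == is treated as a pattern, so bash's trace escapes each character to show the match is literal. A small reproduction, using values taken from this trace:

    set -x
    var=Dirty get=HugePages_Rsvd
    [[ $var == $get ]] || echo "skip $var"   # traces as: [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    set +x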
00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.071 20:22:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:46.071 nr_hugepages=512 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:46.071 resv_hugepages=0 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:46.071 surplus_hugepages=0 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:46.071 anon_hugepages=0 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.071 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575864 kB' 'MemAvailable: 10540884 kB' 'Buffers: 2436 kB' 'Cached: 3167876 kB' 'SwapCached: 0 kB' 'Active: 461728 kB' 'Inactive: 2827796 kB' 'Active(anon): 129688 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120800 kB' 'Mapped: 48776 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163780 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79532 kB' 'KernelStack: 6528 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 
0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
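The second snapshot above is read back for HugePages_Total: it reports HugePages_Total: 512, HugePages_Free: 512, HugePages_Rsvd: 0 and HugePages_Surp: 0 with Hugepagesize: 2048 kB, so the pool is 512 × 2048 kB = 1048576 kB, matching the Hugetlb field. The surrounding hugepages.sh checks (@107/@109/@110) verify that the value read back accounts for the requested pages plus surplus and reserved ones. A sketch of that bookkeeping, using the numbers from this run (the exit-on-mismatch is illustrative; the script's own failure handling may differ):

    nr_hugepages=512
    surp=0     # get_meminfo HugePages_Surp, earlier in the trace
    resv=0     # get_meminfo HugePages_Rsvd, just above
    total=512  # get_meminfo HugePages_Total being read here
    # custom_alloc only proceeds when the kernel's view matches the request
    (( total == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"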
00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.072 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.332 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575808 kB' 'MemUsed: 4666172 kB' 'SwapCached: 0 kB' 'Active: 461756 kB' 'Inactive: 2827796 kB' 'Active(anon): 129716 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827796 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 3170312 kB' 'Mapped: 48776 kB' 'AnonPages: 120820 kB' 'Shmem: 10476 kB' 'KernelStack: 6544 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84248 kB' 'Slab: 163776 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.333 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:46.334 node0=512 expecting 512 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:46.334 00:05:46.334 real 0m0.690s 00:05:46.334 user 0m0.304s 00:05:46.334 sys 0m0.405s 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.334 20:22:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:46.335 ************************************ 00:05:46.335 END TEST custom_alloc 00:05:46.335 ************************************ 00:05:46.335 20:22:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:46.335 20:22:40 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:46.335 20:22:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.335 20:22:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.335 20:22:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:46.335 ************************************ 00:05:46.335 START TEST no_shrink_alloc 00:05:46.335 ************************************ 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:46.335 20:22:40 
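With custom_alloc verified (node0=512 expecting 512), the no_shrink_alloc test begins by converting its requested size into a huge page count: the traced get_test_nr_hugepages 2097152 0 call, with the 2048 kB page size reported by meminfo, yields the 1024 pages that appear in the later dumps, pinned to node 0. A sketch of that computation, assuming the size argument is in kB; the helper name and structure are illustrative, not SPDK's exact hugepages.sh:

get_test_nr_hugepages_sketch() {
    local size_kb=$1; shift          # requested size in kB
    local -a node_ids=("$@")         # explicit NUMA node ids, may be empty

    local hugepagesize_kb
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # e.g. 2048
    local nr_hugepages=$(( size_kb / hugepagesize_kb ))                   # 2097152 / 2048 = 1024

    declare -gA nodes_test=()
    if (( ${#node_ids[@]} > 0 )); then
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[$node]=$nr_hugepages   # every named node gets the full count
        done
    else
        nodes_test[0]=$nr_hugepages           # default: put everything on node 0
    fi

    declare -p nodes_test
}

get_test_nr_hugepages_sketch 2097152 0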
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.335 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.855 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.855 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.855 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.855 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6531060 kB' 'MemAvailable: 9496084 kB' 'Buffers: 2436 kB' 'Cached: 3167880 kB' 'SwapCached: 0 kB' 'Active: 462560 kB' 'Inactive: 2827800 kB' 'Active(anon): 130520 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827800 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121432 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163804 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79556 kB' 'KernelStack: 6628 
kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.855 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.856 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:46.857 20:22:40 
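The AnonHugePages scan above ends with echo 0, and it only runs because the earlier transparent-hugepage check ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) found THP not globally disabled; anonymous huge pages could otherwise skew the explicit huge page accounting. A sketch of that guard, assuming the standard kernel sysfs and /proc/meminfo interfaces; the surrounding logic is illustrative:

thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)

if [[ $thp_state != *"[never]"* ]]; then
    # THP may be active ("always" or "madvise"), so anonymous huge pages can exist
    # and must be captured separately from the explicit hugetlb pool.
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon_kb=0
fi
echo "AnonHugePages: ${anon_kb:-0} kB"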
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6531060 kB' 'MemAvailable: 9496084 kB' 'Buffers: 2436 kB' 'Cached: 3167880 kB' 'SwapCached: 0 kB' 'Active: 461720 kB' 'Inactive: 2827800 kB' 'Active(anon): 129680 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827800 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 120756 kB' 'Mapped: 48948 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163828 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79580 kB' 'KernelStack: 6576 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.857 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 
20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.858 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:46.859 
20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6531060 kB' 'MemAvailable: 9496084 kB' 'Buffers: 2436 kB' 'Cached: 3167880 kB' 'SwapCached: 0 kB' 'Active: 459568 kB' 'Inactive: 2827800 kB' 'Active(anon): 127528 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827800 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118864 kB' 'Mapped: 48332 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163836 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79588 kB' 'KernelStack: 6576 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.859 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.860 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:46.861 nr_hugepages=1024 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:46.861 resv_hugepages=0 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:46.861 surplus_hugepages=0 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:46.861 anon_hugepages=0 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6531060 kB' 'MemAvailable: 9496084 kB' 'Buffers: 2436 kB' 'Cached: 3167880 kB' 'SwapCached: 0 kB' 'Active: 459140 kB' 'Inactive: 2827800 kB' 'Active(anon): 127100 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827800 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118232 kB' 'Mapped: 48112 kB' 'Shmem: 10476 kB' 'KReclaimable: 84248 kB' 'Slab: 163804 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79556 kB' 'KernelStack: 6528 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.861 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
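The wall of trace above and below is the xtrace output of the test's get_meminfo helper: it prints the contents of /proc/meminfo (or a per-node meminfo file under /sys/devices/system/node), then scans it key by key, skipping every field with "continue" until the requested one (HugePages_Surp, HugePages_Rsvd, HugePages_Total) is found and its value is echoed back to hugepages.sh. A minimal sketch of that lookup pattern, assuming the field names and file layout visible in the log (the real helper is setup/common.sh and may differ in detail):

#!/usr/bin/env bash
# Illustrative sketch of the meminfo lookup traced above (assumed layout;
# the actual setup/common.sh implementation may differ).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the node-specific meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one, then print its value.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Example mirroring the checks performed in this part of the trace:
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
nr=$(get_meminfo HugePages_Total)
echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp"

With the meminfo values shown in the log (HugePages_Total: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0), the subsequent assertion (( 1024 == nr_hugepages + surp + resv )) holds, which is why the trace proceeds to the per-node checks.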
00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.862 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6531408 kB' 'MemUsed: 5710572 kB' 'SwapCached: 0 kB' 'Active: 459056 kB' 'Inactive: 2827800 kB' 'Active(anon): 127016 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827800 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 3170316 kB' 'Mapped: 48112 kB' 'AnonPages: 118096 kB' 'Shmem: 10476 kB' 'KernelStack: 6496 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84248 kB' 'Slab: 163800 kB' 'SReclaimable: 84248 kB' 'SUnreclaim: 79552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.863 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:46.864 node0=1024 expecting 1024 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:46.864 20:22:40 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.864 20:22:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.433 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.433 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.433 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.433 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.433 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6531284 kB' 'MemAvailable: 9496308 kB' 'Buffers: 2436 kB' 'Cached: 3167884 kB' 'SwapCached: 0 kB' 'Active: 460184 kB' 'Inactive: 2827804 kB' 'Active(anon): 128144 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827804 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119024 kB' 'Mapped: 48360 kB' 'Shmem: 10476 kB' 'KReclaimable: 84240 kB' 'Slab: 163672 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 79432 kB' 'KernelStack: 6640 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.433 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.434 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6531536 kB' 'MemAvailable: 9496560 kB' 'Buffers: 2436 kB' 'Cached: 3167884 kB' 'SwapCached: 0 kB' 'Active: 459328 kB' 'Inactive: 2827804 kB' 'Active(anon): 127288 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827804 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118632 kB' 'Mapped: 48188 kB' 'Shmem: 10476 kB' 'KReclaimable: 84240 kB' 'Slab: 163672 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 79432 kB' 'KernelStack: 6528 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.435 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.436 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
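[editor's note] The long runs of "[[ Key == HugePages_Surp ]] / continue" above are the per-key scan done by get_meminfo in setup/common.sh: it snapshots /proc/meminfo (or the per-node sysfs meminfo when a node is given), strips the "Node N " prefix, then walks the "Key: value" pairs until the requested field matches and echoes its value (here 0 for HugePages_Surp, then HugePages_Rsvd below). The following is a minimal sketch of that pattern, reconstructed only from the commands visible in the trace; argument handling and the failure path are assumptions, not the script's exact code.

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above; illustrative, not the upstream script.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}          # e.g. HugePages_Surp, optionally a NUMA node id
    local mem_f=/proc/meminfo mem var val _

    # Per-node stats come from sysfs when a node id is requested and present.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix of per-node files

    # Scan "Key: value [kB]" pairs until the requested key matches, then print
    # its value; every non-matching key is the "continue" seen in the log.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Example: surplus hugepages on node 0 (prints "0" in the run above).
get_meminfo HugePages_Surp 0

In the run above the requested keys sit near the end of the meminfo listing, which is why most of the trace is spent on continue lines before the final "echo 0" / "return 0".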
00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:47.437 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6531536 kB' 'MemAvailable: 9496560 kB' 'Buffers: 2436 kB' 'Cached: 3167884 kB' 'SwapCached: 0 kB' 'Active: 459100 kB' 'Inactive: 2827804 kB' 'Active(anon): 127060 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827804 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118156 kB' 'Mapped: 48036 kB' 'Shmem: 10476 kB' 'KReclaimable: 84240 kB' 'Slab: 163672 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 79432 kB' 'KernelStack: 6464 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 
20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.438 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.439 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 
20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.440 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:47.701 nr_hugepages=1024 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:47.701 resv_hugepages=0 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:47.701 surplus_hugepages=0 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:47.701 anon_hugepages=0 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 
== nr_hugepages + surp + resv )) 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6531536 kB' 'MemAvailable: 9496560 kB' 'Buffers: 2436 kB' 'Cached: 3167884 kB' 'SwapCached: 0 kB' 'Active: 459316 kB' 'Inactive: 2827804 kB' 'Active(anon): 127276 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827804 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118372 kB' 'Mapped: 48036 kB' 'Shmem: 10476 kB' 'KReclaimable: 84240 kB' 'Slab: 163672 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 79432 kB' 'KernelStack: 6448 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.701 20:22:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.701 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.702 20:22:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.702 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@19 -- # local var val 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6531788 kB' 'MemUsed: 5710192 kB' 'SwapCached: 0 kB' 'Active: 459052 kB' 'Inactive: 2827804 kB' 'Active(anon): 127012 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2827804 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 3170320 kB' 'Mapped: 48036 kB' 'AnonPages: 118424 kB' 'Shmem: 10476 kB' 'KernelStack: 6496 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84240 kB' 'Slab: 163672 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 79432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.703 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:05:47.704 node0=1024 expecting 1024 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:47.704 00:05:47.704 real 0m1.339s 00:05:47.704 user 0m0.634s 00:05:47.704 sys 0m0.804s 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.704 ************************************ 00:05:47.704 20:22:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:47.704 END TEST no_shrink_alloc 00:05:47.704 ************************************ 00:05:47.704 20:22:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:47.704 20:22:41 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:47.704 20:22:41 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:47.704 20:22:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:47.704 20:22:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:47.704 20:22:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:47.704 20:22:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:47.704 20:22:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:47.704 20:22:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:47.704 20:22:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:47.704 00:05:47.704 real 0m5.845s 00:05:47.704 user 0m2.697s 00:05:47.704 sys 0m3.379s 00:05:47.704 ************************************ 00:05:47.704 END TEST hugepages 00:05:47.704 ************************************ 00:05:47.704 20:22:41 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.704 20:22:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:47.704 20:22:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:47.705 20:22:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:47.705 20:22:41 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.705 20:22:41 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.705 20:22:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:47.705 ************************************ 00:05:47.705 START TEST driver 00:05:47.705 ************************************ 00:05:47.705 20:22:41 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:47.705 * Looking for test storage... 
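The wall of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries above is setup/common.sh scanning one per-node meminfo file a field at a time until it reaches the key it was asked for, after which no_shrink_alloc folds the value into its per-node tally and asserts node0 still holds 1024 pages. A minimal sketch of that pattern, simplified from the trace (the function name and the final assertion are illustrative, not the exact SPDK helper):

  get_node_meminfo() {   # e.g. get_node_meminfo HugePages_Surp 0
      local key=$1 node=$2 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] || continue        # every other field is skipped, as in the trace
          echo "${val//[!0-9]/}"                  # drop a trailing "kB" where present
          return 0
      done < <(sed "s/^Node $node //" "/sys/devices/system/node/node$node/meminfo")
  }

  declare -a nodes_test
  (( nodes_test[0] += $(get_node_meminfo HugePages_Surp 0) ))
  echo "node0=${nodes_test[0]} expecting 1024"
  [[ ${nodes_test[0]} == 1024 ]]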
00:05:47.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:47.705 20:22:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:47.705 20:22:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:47.705 20:22:41 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:54.278 20:22:47 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:54.278 20:22:47 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.278 20:22:47 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.278 20:22:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:54.278 ************************************ 00:05:54.278 START TEST guess_driver 00:05:54.278 ************************************ 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:54.278 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:54.278 Looking for driver=uio_pci_generic 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
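guess_driver, as traced here, prefers vfio when IOMMU groups are present (or the unsafe no-IOMMU knob is enabled) and otherwise accepts uio_pci_generic as long as modprobe can resolve the module. A condensed sketch of that decision with the surrounding helpers inlined (illustrative, not the verbatim driver.sh):

  pick_driver() {
      local unsafe=""
      local groups=()
      shopt -s nullglob
      groups=(/sys/kernel/iommu_groups/*)
      shopt -u nullglob
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
          echo uio_pci_generic          # the log shows uio.ko.xz + uio_pci_generic.ko.xz resolving
      else
          echo 'No valid driver found'
      fi
  }

  driver=$(pick_driver)                 # here: zero IOMMU groups, so uio_pci_generic wins
  echo "Looking for driver=$driver"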
00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.278 20:22:47 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:54.278 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:54.278 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:54.278 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:54.845 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:54.845 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:54.845 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:54.845 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:54.845 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:54.845 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:54.845 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:54.845 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:54.845 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:54.845 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:54.845 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:54.845 20:22:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:55.104 20:22:49 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:55.104 20:22:49 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:55.104 20:22:49 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:55.104 20:22:49 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:01.665 00:06:01.665 real 0m7.167s 00:06:01.665 user 0m0.764s 00:06:01.665 sys 0m1.494s 00:06:01.665 20:22:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.665 20:22:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:01.665 ************************************ 00:06:01.665 END TEST guess_driver 00:06:01.665 ************************************ 00:06:01.665 20:22:54 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:06:01.665 00:06:01.665 real 0m13.218s 00:06:01.665 user 0m1.101s 00:06:01.665 sys 0m2.321s 00:06:01.665 20:22:54 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.665 ************************************ 00:06:01.665 END TEST driver 00:06:01.665 20:22:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:01.665 ************************************ 00:06:01.665 20:22:55 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:01.665 20:22:55 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:01.665 20:22:55 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.665 20:22:55 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 
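The "[[ devices: == \-\> ]] / [[ -> == \-\> ]]" pairs above are the test re-reading the "setup.sh config" output after binding: header lines are skipped, and for every line whose fifth field is the "->" arrow the bound driver is compared against the one just guessed. Roughly (field positions as they appear in the log; paths follow the trace):

  driver=uio_pci_generic fail=0
  while read -r _ _ _ _ marker setup_driver; do
      [[ $marker == '->' ]] || continue            # e.g. the "devices:" header line
      [[ $setup_driver == "$driver" ]] || fail=1
  done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
  (( fail == 0 )) && /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset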
00:06:01.665 20:22:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:01.665 ************************************ 00:06:01.665 START TEST devices 00:06:01.665 ************************************ 00:06:01.665 20:22:55 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:01.665 * Looking for test storage... 00:06:01.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:01.665 20:22:55 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:01.665 20:22:55 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:01.665 20:22:55 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:01.665 20:22:55 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:06:02.233 20:22:56 setup.sh.devices -- 
common/autotest_common.sh@1662 -- # local device=nvme2n3 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:02.233 20:22:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:02.233 20:22:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:02.233 20:22:56 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:02.233 No valid GPT data, bailing 00:06:02.233 20:22:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:02.233 20:22:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.233 20:22:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:02.233 20:22:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:02.233 20:22:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:02.233 20:22:56 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@200 -- # for 
block in "/sys/block/nvme"!(*c*) 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:02.233 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:02.233 20:22:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:02.233 20:22:56 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:02.233 No valid GPT data, bailing 00:06:02.233 20:22:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:02.233 20:22:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.233 20:22:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:02.234 20:22:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:02.234 20:22:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:02.234 20:22:56 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:06:02.234 20:22:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:06:02.234 20:22:56 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:06:02.234 No valid GPT data, bailing 00:06:02.234 20:22:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:02.234 20:22:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.234 20:22:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:06:02.234 20:22:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:06:02.234 20:22:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:06:02.234 20:22:56 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@201 -- # 
ctrl=nvme2n2 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:02.234 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:06:02.234 20:22:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:06:02.234 20:22:56 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:06:02.492 No valid GPT data, bailing 00:06:02.492 20:22:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:02.492 20:22:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.492 20:22:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.492 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:06:02.492 20:22:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:06:02.492 20:22:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:06:02.493 20:22:56 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:06:02.493 20:22:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:06:02.493 20:22:56 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:06:02.493 No valid GPT data, bailing 00:06:02.493 20:22:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:02.493 20:22:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.493 20:22:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:06:02.493 20:22:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:06:02.493 20:22:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:06:02.493 20:22:56 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:06:02.493 
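The nvme enumeration above boils down to three gates per namespace: it must not be zoned, must carry no partition table (the "No valid GPT data, bailing" lines are the pass case), and must be at least min_disk_size bytes. A sketch of that filter, simplified from devices.sh (the PCI lookup and the blkid stand-in for spdk-gpt.py are illustrative):

  min_disk_size=$((3 * 1024 * 1024 * 1024))        # 3221225472, as in devices.sh@198
  declare -a blocks=()
  declare -A blocks_to_pci=()
  for block in /sys/block/nvme*; do
      dev=${block##*/}
      [[ $dev == *c*n* ]] && continue               # skip controller-scoped names like nvme3c3n1
      [[ $(< "$block/queue/zoned") == none ]] || continue
      # an empty PTTYPE from blkid (or "No valid GPT data, bailing" from spdk-gpt.py)
      # means the disk is unused and therefore fair game
      [[ -z $(blkid -s PTTYPE -o value "/dev/$dev" 2> /dev/null) ]] || continue
      size=$(( $(< "$block/size") * 512 ))          # /sys reports size in 512-byte sectors
      (( size >= min_disk_size )) || continue
      blocks+=("$dev")
      blocks_to_pci[$dev]=$(basename "$(readlink -f "$block/device/device")")
  done
  test_disk=${blocks[0]}                            # here nvme0n1, backed by 0000:00:11.0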
20:22:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:06:02.493 20:22:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:06:02.493 20:22:56 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:06:02.493 No valid GPT data, bailing 00:06:02.493 20:22:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:02.493 20:22:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.493 20:22:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:06:02.493 20:22:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:06:02.493 20:22:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:06:02.493 20:22:56 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:02.493 20:22:56 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:02.493 20:22:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.493 20:22:56 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.493 20:22:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:02.493 ************************************ 00:06:02.493 START TEST nvme_mount 00:06:02.493 ************************************ 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:02.493 20:22:56 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:03.868 Creating new GPT entries in memory. 00:06:03.868 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:03.868 other utilities. 00:06:03.868 20:22:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:03.868 20:22:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:03.868 20:22:57 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:03.868 20:22:57 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:03.868 20:22:57 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:04.803 Creating new GPT entries in memory. 00:06:04.803 The operation has completed successfully. 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 72948 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- 
setup/devices.sh@59 -- # local pci status 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:04.803 20:22:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.061 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.061 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.061 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.061 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.061 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.061 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.320 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.320 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.579 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:05.579 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:05.579 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.579 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:05.579 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:05.579 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:05.579 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.579 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.579 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:05.579 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:05.579 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 
(ext4): 53 ef 00:06:05.579 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:05.579 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:05.837 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:05.837 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:05.837 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:05.837 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:05.837 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:05.837 20:22:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:05.837 20:22:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.837 20:22:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:05.837 20:22:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:05.837 20:22:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:06.095 20:22:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.095 20:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:06.095 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.095 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:06.095 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:06.095 20:23:00 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.095 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.095 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.355 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.355 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.355 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.355 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.355 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.355 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.612 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.612 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.871 20:23:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:07.129 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.129 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding 
PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:07.129 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:07.129 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.129 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.129 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.396 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.396 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.396 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.396 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.396 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.396 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.655 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.655 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.912 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:07.912 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:07.912 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:07.912 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:07.913 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:07.913 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:07.913 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:07.913 20:23:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:07.913 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:07.913 00:06:07.913 real 0m5.339s 00:06:07.913 user 0m1.486s 00:06:07.913 sys 0m1.560s 00:06:07.913 20:23:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.913 20:23:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:07.913 ************************************ 00:06:07.913 END TEST nvme_mount 00:06:07.913 ************************************ 00:06:07.913 20:23:02 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:06:07.913 20:23:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:07.913 20:23:02 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.913 20:23:02 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.913 20:23:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:07.913 ************************************ 00:06:07.913 START TEST dm_mount 00:06:07.913 ************************************ 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- 
# pv0=nvme0n1p1 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:07.913 20:23:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:09.291 Creating new GPT entries in memory. 00:06:09.291 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:09.291 other utilities. 00:06:09.291 20:23:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:09.291 20:23:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:09.291 20:23:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:09.291 20:23:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:09.291 20:23:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:10.227 Creating new GPT entries in memory. 00:06:10.227 The operation has completed successfully. 00:06:10.227 20:23:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:10.227 20:23:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:10.227 20:23:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:10.227 20:23:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:10.227 20:23:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:11.162 The operation has completed successfully. 
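The two "The operation has completed successfully." messages above come from partition_drive laying out nvme0n1 for the dm test: wipe the label, then create each partition under flock so concurrent jobs do not race, with start/end sectors computed exactly as common.sh@58-60 shows. A trimmed sketch (the udev wait via scripts/sync_dev_uevents.sh is noted in a comment rather than reproduced):

  disk=nvme0n1 part_no=2
  size=$((1073741824 / 4096))                       # per-partition length in sectors (262144)
  sgdisk "/dev/$disk" --zap-all
  part_start=0 part_end=0
  for ((part = 1; part <= part_no; part++)); do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
      (( part_end = part_start + size - 1 ))
      flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
  done
  # the real test wraps this in scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2,
  # which returns only once udev has created both partition nodes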
00:06:11.162 20:23:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:11.162 20:23:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:11.162 20:23:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 73573 00:06:11.162 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:11.163 20:23:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:11.421 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.421 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:11.421 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:11.421 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.421 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.421 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.421 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.421 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.679 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.679 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.679 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.679 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.937 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:11.937 20:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:12.195 20:23:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:12.453 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.453 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:12.453 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:12.453 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.453 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.453 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.453 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.453 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.711 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.711 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.711 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.711 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.968 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.968 20:23:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.968 20:23:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:12.968 20:23:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:12.968 20:23:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:12.968 20:23:07 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:12.968 20:23:07 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:13.229 20:23:07 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:13.229 20:23:07 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:13.229 20:23:07 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:13.229 20:23:07 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:06:13.229 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:13.229 20:23:07 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:13.229 20:23:07 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:13.229 00:06:13.229 real 0m5.147s 00:06:13.229 user 0m0.980s 00:06:13.229 sys 0m1.107s 00:06:13.229 20:23:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.229 20:23:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:13.229 ************************************ 00:06:13.229 END TEST dm_mount 00:06:13.229 ************************************ 00:06:13.229 20:23:07 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:06:13.229 20:23:07 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:13.229 20:23:07 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:13.229 20:23:07 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.229 20:23:07 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:13.229 20:23:07 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:13.229 20:23:07 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:13.229 20:23:07 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:13.488 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:13.488 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:13.488 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:13.488 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:13.488 20:23:07 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:13.488 20:23:07 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:13.488 20:23:07 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:13.488 20:23:07 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:13.488 20:23:07 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:13.488 20:23:07 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:13.488 20:23:07 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:13.488 ************************************ 00:06:13.488 END TEST devices 00:06:13.488 ************************************ 00:06:13.488 00:06:13.488 real 0m12.474s 00:06:13.488 user 0m3.345s 00:06:13.488 sys 0m3.485s 00:06:13.488 20:23:07 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.488 20:23:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:13.488 20:23:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:13.488 00:06:13.488 real 0m43.704s 00:06:13.488 user 0m10.309s 00:06:13.488 sys 0m13.220s 00:06:13.488 20:23:07 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.488 20:23:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:13.488 ************************************ 00:06:13.488 END TEST setup.sh 00:06:13.488 ************************************ 00:06:13.488 20:23:07 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.488 20:23:07 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:14.054 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.620 Hugepages 00:06:14.620 node hugesize free / total 00:06:14.620 node0 1048576kB 0 / 0 00:06:14.620 node0 2048kB 2048 / 2048 00:06:14.620 00:06:14.620 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:14.620 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:14.620 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:14.878 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:14.878 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:14.878 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:14.878 20:23:08 -- spdk/autotest.sh@130 -- # uname -s 00:06:14.878 20:23:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:14.878 20:23:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:14.878 20:23:08 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:15.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:16.008 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.008 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.008 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.272 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.272 20:23:10 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:17.216 20:23:11 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:17.216 20:23:11 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:17.216 20:23:11 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:17.216 20:23:11 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:17.216 20:23:11 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:17.216 20:23:11 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:17.216 20:23:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:17.216 20:23:11 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:17.216 20:23:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:17.216 20:23:11 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:17.216 20:23:11 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:17.216 20:23:11 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:17.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.784 Waiting for block devices as requested 00:06:17.784 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:18.043 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:18.043 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:18.301 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:23.583 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:23.583 20:23:17 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:23.583 20:23:17 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:23.583 20:23:17 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:23.583 20:23:17 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:06:23.583 20:23:17 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:23.583 20:23:17 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:23.583 20:23:17 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:23.583 20:23:17 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:06:23.583 20:23:17 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:06:23.583 20:23:17 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:06:23.583 20:23:17 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:23.583 20:23:17 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:06:23.583 20:23:17 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:23.583 20:23:17 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:23.583 20:23:17 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:23.583 20:23:17 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:23.583 20:23:17 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:06:23.583 20:23:17 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:23.583 20:23:17 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:23.583 20:23:17 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:23.583 20:23:17 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:23.583 20:23:17 -- common/autotest_common.sh@1557 -- # continue 00:06:23.583 20:23:17 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:23.583 20:23:17 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:23.583 20:23:17 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:23.583 20:23:17 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:06:23.583 20:23:17 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:23.583 20:23:17 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:23.583 20:23:17 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:23.583 20:23:17 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:23.583 20:23:17 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:23.583 20:23:17 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:23.583 20:23:17 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:23.583 20:23:17 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:23.583 20:23:17 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:23.583 20:23:17 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:23.583 20:23:17 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:23.583 20:23:17 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:23.583 20:23:17 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:23.583 20:23:17 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:23.583 20:23:17 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:23.583 20:23:17 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:23.583 20:23:17 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:23.583 20:23:17 -- common/autotest_common.sh@1557 -- # continue 00:06:23.583 20:23:17 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:23.583 20:23:17 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:23.583 20:23:17 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:23.583 20:23:17 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:06:23.583 20:23:17 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:23.583 20:23:17 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:23.583 20:23:17 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:23.583 20:23:17 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:06:23.583 20:23:17 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:06:23.583 20:23:17 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:06:23.583 20:23:17 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:06:23.583 20:23:17 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:23.583 20:23:17 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:23.583 20:23:17 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:23.583 20:23:17 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:23.583 20:23:17 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:23.583 20:23:17 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:06:23.583 20:23:17 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:23.583 20:23:17 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:23.583 20:23:17 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:23.583 20:23:17 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:23.583 20:23:17 -- common/autotest_common.sh@1557 -- # continue 00:06:23.583 20:23:17 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:23.583 20:23:17 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:23.583 20:23:17 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:23.583 20:23:17 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:06:23.583 20:23:17 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:23.583 20:23:17 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:23.584 20:23:17 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:23.584 20:23:17 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:06:23.584 20:23:17 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:06:23.584 20:23:17 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:06:23.584 20:23:17 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:06:23.584 20:23:17 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:23.584 20:23:17 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:23.584 20:23:17 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:23.584 20:23:17 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:23.584 20:23:17 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:23.584 20:23:17 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:06:23.584 20:23:17 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:23.584 20:23:17 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:23.584 20:23:17 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:23.584 20:23:17 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:23.584 20:23:17 -- common/autotest_common.sh@1557 -- # continue 00:06:23.584 20:23:17 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:23.584 20:23:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:23.584 20:23:17 -- common/autotest_common.sh@10 -- # set +x 00:06:23.584 20:23:17 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:23.584 20:23:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:23.584 20:23:17 -- common/autotest_common.sh@10 -- # set +x 00:06:23.584 20:23:17 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:23.843 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:24.410 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.410 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.410 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.410 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.669 20:23:18 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:24.669 20:23:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.669 20:23:18 -- common/autotest_common.sh@10 -- # set +x 00:06:24.669 20:23:18 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:24.669 20:23:18 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:24.669 20:23:18 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:24.669 20:23:18 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:24.669 20:23:18 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:24.669 20:23:18 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:24.669 20:23:18 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:24.669 20:23:18 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:24.669 20:23:18 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:24.669 20:23:18 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:24.669 20:23:18 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:24.669 20:23:18 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:24.669 20:23:18 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:24.669 20:23:18 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:24.669 20:23:18 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:24.669 20:23:18 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:24.669 20:23:18 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:24.669 20:23:18 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:24.669 20:23:18 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:24.669 20:23:18 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:24.669 20:23:18 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:24.669 20:23:18 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:24.669 20:23:18 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:24.669 20:23:18 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:24.669 20:23:18 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:24.669 20:23:18 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:24.669 20:23:18 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:24.669 20:23:18 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:24.669 20:23:18 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:24.669 20:23:18 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:24.669 20:23:18 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:24.669 20:23:18 -- common/autotest_common.sh@1593 -- # return 0 00:06:24.669 20:23:18 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:24.669 20:23:18 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:24.669 20:23:18 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:24.669 20:23:18 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:24.669 20:23:18 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:24.669 20:23:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.669 20:23:18 -- common/autotest_common.sh@10 -- # set +x 00:06:24.669 20:23:18 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:24.669 20:23:18 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:24.669 20:23:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.669 20:23:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.669 20:23:18 -- common/autotest_common.sh@10 -- # set +x 00:06:24.669 ************************************ 00:06:24.669 START TEST env 00:06:24.669 ************************************ 00:06:24.669 20:23:18 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:24.928 * Looking for test storage... 00:06:24.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:24.928 20:23:18 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:24.928 20:23:18 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.928 20:23:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.928 20:23:18 env -- common/autotest_common.sh@10 -- # set +x 00:06:24.928 ************************************ 00:06:24.928 START TEST env_memory 00:06:24.928 ************************************ 00:06:24.928 20:23:18 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:24.928 00:06:24.928 00:06:24.928 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.928 http://cunit.sourceforge.net/ 00:06:24.928 00:06:24.928 00:06:24.928 Suite: memory 00:06:24.928 Test: alloc and free memory map ...[2024-07-12 20:23:18.946391] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:24.928 passed 00:06:24.928 Test: mem map translation ...[2024-07-12 20:23:18.996551] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:24.928 [2024-07-12 20:23:18.996791] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:24.928 [2024-07-12 20:23:18.996995] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:24.928 [2024-07-12 20:23:18.997175] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:24.928 passed 00:06:25.188 Test: mem map registration ...[2024-07-12 20:23:19.077118] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:25.188 [2024-07-12 20:23:19.077198] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:25.188 passed 00:06:25.188 Test: mem map adjacent registrations ...passed 00:06:25.188 00:06:25.188 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.188 suites 1 1 n/a 0 0 00:06:25.188 tests 4 4 4 0 0 00:06:25.188 asserts 152 152 152 0 n/a 00:06:25.188 00:06:25.188 Elapsed time = 0.283 seconds 00:06:25.188 00:06:25.188 real 0m0.323s 00:06:25.188 user 0m0.296s 00:06:25.188 sys 0m0.021s 00:06:25.188 20:23:19 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.188 20:23:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:25.188 ************************************ 00:06:25.188 END TEST env_memory 00:06:25.188 ************************************ 00:06:25.188 20:23:19 env -- common/autotest_common.sh@1142 -- # return 0 00:06:25.188 20:23:19 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:25.188 20:23:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.188 20:23:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.188 20:23:19 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.188 ************************************ 00:06:25.188 START TEST env_vtophys 00:06:25.188 ************************************ 00:06:25.188 20:23:19 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:25.188 EAL: lib.eal log level changed from notice to debug 00:06:25.188 EAL: Detected lcore 0 as core 0 on socket 0 00:06:25.188 EAL: Detected lcore 1 as core 0 on socket 0 00:06:25.188 EAL: Detected lcore 2 as core 0 on socket 0 00:06:25.188 EAL: Detected lcore 3 as core 0 on socket 0 00:06:25.188 EAL: Detected lcore 4 as core 0 on socket 0 00:06:25.188 EAL: Detected lcore 5 as core 0 on socket 0 00:06:25.188 EAL: Detected lcore 6 as core 0 on socket 0 00:06:25.188 EAL: Detected lcore 7 as core 0 on socket 0 00:06:25.188 EAL: Detected lcore 8 as core 0 on socket 0 00:06:25.188 EAL: Detected lcore 9 as core 0 on socket 0 00:06:25.188 EAL: Maximum logical cores by configuration: 128 00:06:25.188 EAL: Detected CPU lcores: 10 00:06:25.188 EAL: Detected NUMA nodes: 1 00:06:25.188 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:06:25.188 EAL: Detected shared linkage of DPDK 00:06:25.188 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:06:25.188 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:06:25.188 EAL: Registered [vdev] bus. 
00:06:25.188 EAL: bus.vdev log level changed from disabled to notice 00:06:25.188 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:06:25.188 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:06:25.188 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:25.188 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:25.188 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:06:25.188 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:06:25.188 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:06:25.188 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:06:25.448 EAL: No shared files mode enabled, IPC will be disabled 00:06:25.448 EAL: No shared files mode enabled, IPC is disabled 00:06:25.448 EAL: Selected IOVA mode 'PA' 00:06:25.448 EAL: Probing VFIO support... 00:06:25.448 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:25.448 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:25.448 EAL: Ask a virtual area of 0x2e000 bytes 00:06:25.448 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:25.448 EAL: Setting up physically contiguous memory... 00:06:25.448 EAL: Setting maximum number of open files to 524288 00:06:25.448 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:25.448 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:25.448 EAL: Ask a virtual area of 0x61000 bytes 00:06:25.448 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:25.448 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:25.448 EAL: Ask a virtual area of 0x400000000 bytes 00:06:25.448 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:25.448 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:25.448 EAL: Ask a virtual area of 0x61000 bytes 00:06:25.448 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:25.448 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:25.448 EAL: Ask a virtual area of 0x400000000 bytes 00:06:25.448 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:25.448 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:25.448 EAL: Ask a virtual area of 0x61000 bytes 00:06:25.448 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:25.448 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:25.448 EAL: Ask a virtual area of 0x400000000 bytes 00:06:25.448 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:25.448 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:25.448 EAL: Ask a virtual area of 0x61000 bytes 00:06:25.448 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:25.448 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:25.448 EAL: Ask a virtual area of 0x400000000 bytes 00:06:25.448 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:25.448 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:25.448 EAL: Hugepages will be freed exactly as allocated. 
00:06:25.448 EAL: No shared files mode enabled, IPC is disabled 00:06:25.448 EAL: No shared files mode enabled, IPC is disabled 00:06:25.448 EAL: TSC frequency is ~2200000 KHz 00:06:25.448 EAL: Main lcore 0 is ready (tid=7f0b3c9f9a40;cpuset=[0]) 00:06:25.448 EAL: Trying to obtain current memory policy. 00:06:25.448 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.448 EAL: Restoring previous memory policy: 0 00:06:25.448 EAL: request: mp_malloc_sync 00:06:25.448 EAL: No shared files mode enabled, IPC is disabled 00:06:25.448 EAL: Heap on socket 0 was expanded by 2MB 00:06:25.448 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:25.448 EAL: No shared files mode enabled, IPC is disabled 00:06:25.448 EAL: Mem event callback 'spdk:(nil)' registered 00:06:25.448 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:25.448 00:06:25.448 00:06:25.448 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.448 http://cunit.sourceforge.net/ 00:06:25.448 00:06:25.448 00:06:25.448 Suite: components_suite 00:06:26.016 Test: vtophys_malloc_test ...passed 00:06:26.017 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:26.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.017 EAL: Restoring previous memory policy: 4 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was expanded by 4MB 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was shrunk by 4MB 00:06:26.017 EAL: Trying to obtain current memory policy. 00:06:26.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.017 EAL: Restoring previous memory policy: 4 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was expanded by 6MB 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was shrunk by 6MB 00:06:26.017 EAL: Trying to obtain current memory policy. 00:06:26.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.017 EAL: Restoring previous memory policy: 4 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was expanded by 10MB 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was shrunk by 10MB 00:06:26.017 EAL: Trying to obtain current memory policy. 
00:06:26.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.017 EAL: Restoring previous memory policy: 4 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was expanded by 18MB 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was shrunk by 18MB 00:06:26.017 EAL: Trying to obtain current memory policy. 00:06:26.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.017 EAL: Restoring previous memory policy: 4 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was expanded by 34MB 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was shrunk by 34MB 00:06:26.017 EAL: Trying to obtain current memory policy. 00:06:26.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.017 EAL: Restoring previous memory policy: 4 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was expanded by 66MB 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was shrunk by 66MB 00:06:26.017 EAL: Trying to obtain current memory policy. 00:06:26.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.017 EAL: Restoring previous memory policy: 4 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was expanded by 130MB 00:06:26.017 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.017 EAL: request: mp_malloc_sync 00:06:26.017 EAL: No shared files mode enabled, IPC is disabled 00:06:26.017 EAL: Heap on socket 0 was shrunk by 130MB 00:06:26.017 EAL: Trying to obtain current memory policy. 00:06:26.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.276 EAL: Restoring previous memory policy: 4 00:06:26.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.276 EAL: request: mp_malloc_sync 00:06:26.276 EAL: No shared files mode enabled, IPC is disabled 00:06:26.276 EAL: Heap on socket 0 was expanded by 258MB 00:06:26.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.276 EAL: request: mp_malloc_sync 00:06:26.276 EAL: No shared files mode enabled, IPC is disabled 00:06:26.276 EAL: Heap on socket 0 was shrunk by 258MB 00:06:26.276 EAL: Trying to obtain current memory policy. 
00:06:26.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.535 EAL: Restoring previous memory policy: 4 00:06:26.535 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.535 EAL: request: mp_malloc_sync 00:06:26.535 EAL: No shared files mode enabled, IPC is disabled 00:06:26.535 EAL: Heap on socket 0 was expanded by 514MB 00:06:26.535 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.535 EAL: request: mp_malloc_sync 00:06:26.535 EAL: No shared files mode enabled, IPC is disabled 00:06:26.535 EAL: Heap on socket 0 was shrunk by 514MB 00:06:26.535 EAL: Trying to obtain current memory policy. 00:06:26.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.794 EAL: Restoring previous memory policy: 4 00:06:26.794 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.794 EAL: request: mp_malloc_sync 00:06:26.794 EAL: No shared files mode enabled, IPC is disabled 00:06:26.794 EAL: Heap on socket 0 was expanded by 1026MB 00:06:27.053 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.312 passed 00:06:27.312 00:06:27.312 EAL: request: mp_malloc_sync 00:06:27.312 EAL: No shared files mode enabled, IPC is disabled 00:06:27.312 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:27.312 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.312 suites 1 1 n/a 0 0 00:06:27.312 tests 2 2 2 0 0 00:06:27.312 asserts 5246 5246 5246 0 n/a 00:06:27.312 00:06:27.312 Elapsed time = 1.874 seconds 00:06:27.312 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.312 EAL: request: mp_malloc_sync 00:06:27.312 EAL: No shared files mode enabled, IPC is disabled 00:06:27.312 EAL: Heap on socket 0 was shrunk by 2MB 00:06:27.312 EAL: No shared files mode enabled, IPC is disabled 00:06:27.312 EAL: No shared files mode enabled, IPC is disabled 00:06:27.312 EAL: No shared files mode enabled, IPC is disabled 00:06:27.312 00:06:27.312 real 0m2.155s 00:06:27.312 user 0m1.057s 00:06:27.312 sys 0m0.955s 00:06:27.312 ************************************ 00:06:27.312 END TEST env_vtophys 00:06:27.312 ************************************ 00:06:27.312 20:23:21 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.312 20:23:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:27.312 20:23:21 env -- common/autotest_common.sh@1142 -- # return 0 00:06:27.312 20:23:21 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:27.312 20:23:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.312 20:23:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.312 20:23:21 env -- common/autotest_common.sh@10 -- # set +x 00:06:27.312 ************************************ 00:06:27.312 START TEST env_pci 00:06:27.312 ************************************ 00:06:27.312 20:23:21 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:27.571 00:06:27.571 00:06:27.571 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.571 http://cunit.sourceforge.net/ 00:06:27.571 00:06:27.571 00:06:27.571 Suite: pci 00:06:27.571 Test: pci_hook ...[2024-07-12 20:23:21.483521] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 75345 has claimed it 00:06:27.571 passed 00:06:27.571 00:06:27.571 EAL: Cannot find device (10000:00:01.0) 00:06:27.571 EAL: Failed to attach device on primary process 00:06:27.571 Run Summary: Type Total Ran Passed Failed 
Inactive 00:06:27.571 suites 1 1 n/a 0 0 00:06:27.571 tests 1 1 1 0 0 00:06:27.571 asserts 25 25 25 0 n/a 00:06:27.571 00:06:27.571 Elapsed time = 0.007 seconds 00:06:27.571 00:06:27.571 real 0m0.065s 00:06:27.571 user 0m0.023s 00:06:27.571 sys 0m0.042s 00:06:27.571 20:23:21 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.571 ************************************ 00:06:27.571 END TEST env_pci 00:06:27.571 ************************************ 00:06:27.571 20:23:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:27.571 20:23:21 env -- common/autotest_common.sh@1142 -- # return 0 00:06:27.571 20:23:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:27.571 20:23:21 env -- env/env.sh@15 -- # uname 00:06:27.571 20:23:21 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:27.571 20:23:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:27.571 20:23:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:27.571 20:23:21 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:27.571 20:23:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.571 20:23:21 env -- common/autotest_common.sh@10 -- # set +x 00:06:27.571 ************************************ 00:06:27.571 START TEST env_dpdk_post_init 00:06:27.571 ************************************ 00:06:27.571 20:23:21 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:27.571 EAL: Detected CPU lcores: 10 00:06:27.571 EAL: Detected NUMA nodes: 1 00:06:27.571 EAL: Detected shared linkage of DPDK 00:06:27.571 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:27.571 EAL: Selected IOVA mode 'PA' 00:06:27.830 Starting DPDK initialization... 00:06:27.830 Starting SPDK post initialization... 00:06:27.830 SPDK NVMe probe 00:06:27.831 Attaching to 0000:00:10.0 00:06:27.831 Attaching to 0000:00:11.0 00:06:27.831 Attaching to 0000:00:12.0 00:06:27.831 Attaching to 0000:00:13.0 00:06:27.831 Attached to 0000:00:10.0 00:06:27.831 Attached to 0000:00:11.0 00:06:27.831 Attached to 0000:00:13.0 00:06:27.831 Attached to 0000:00:12.0 00:06:27.831 Cleaning up... 
00:06:27.831 00:06:27.831 real 0m0.253s 00:06:27.831 user 0m0.073s 00:06:27.831 sys 0m0.084s 00:06:27.831 20:23:21 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.831 ************************************ 00:06:27.831 20:23:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:27.831 END TEST env_dpdk_post_init 00:06:27.831 ************************************ 00:06:27.831 20:23:21 env -- common/autotest_common.sh@1142 -- # return 0 00:06:27.831 20:23:21 env -- env/env.sh@26 -- # uname 00:06:27.831 20:23:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:27.831 20:23:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:27.831 20:23:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.831 20:23:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.831 20:23:21 env -- common/autotest_common.sh@10 -- # set +x 00:06:27.831 ************************************ 00:06:27.831 START TEST env_mem_callbacks 00:06:27.831 ************************************ 00:06:27.831 20:23:21 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:27.831 EAL: Detected CPU lcores: 10 00:06:27.831 EAL: Detected NUMA nodes: 1 00:06:27.831 EAL: Detected shared linkage of DPDK 00:06:27.831 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:27.831 EAL: Selected IOVA mode 'PA' 00:06:28.089 00:06:28.089 00:06:28.089 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.089 http://cunit.sourceforge.net/ 00:06:28.089 00:06:28.089 00:06:28.089 Suite: memory 00:06:28.089 Test: test ... 00:06:28.089 register 0x200000200000 2097152 00:06:28.089 malloc 3145728 00:06:28.089 register 0x200000400000 4194304 00:06:28.089 buf 0x200000500000 len 3145728 PASSED 00:06:28.089 malloc 64 00:06:28.089 buf 0x2000004fff40 len 64 PASSED 00:06:28.089 malloc 4194304 00:06:28.089 register 0x200000800000 6291456 00:06:28.089 buf 0x200000a00000 len 4194304 PASSED 00:06:28.089 free 0x200000500000 3145728 00:06:28.089 free 0x2000004fff40 64 00:06:28.089 unregister 0x200000400000 4194304 PASSED 00:06:28.089 free 0x200000a00000 4194304 00:06:28.089 unregister 0x200000800000 6291456 PASSED 00:06:28.089 malloc 8388608 00:06:28.089 register 0x200000400000 10485760 00:06:28.089 buf 0x200000600000 len 8388608 PASSED 00:06:28.089 free 0x200000600000 8388608 00:06:28.089 unregister 0x200000400000 10485760 PASSED 00:06:28.089 passed 00:06:28.089 00:06:28.089 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.089 suites 1 1 n/a 0 0 00:06:28.089 tests 1 1 1 0 0 00:06:28.089 asserts 15 15 15 0 n/a 00:06:28.089 00:06:28.089 Elapsed time = 0.012 seconds 00:06:28.089 00:06:28.089 real 0m0.219s 00:06:28.089 user 0m0.045s 00:06:28.089 sys 0m0.071s 00:06:28.089 20:23:22 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.089 20:23:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:28.089 ************************************ 00:06:28.089 END TEST env_mem_callbacks 00:06:28.089 ************************************ 00:06:28.089 20:23:22 env -- common/autotest_common.sh@1142 -- # return 0 00:06:28.089 00:06:28.089 real 0m3.360s 00:06:28.089 user 0m1.616s 00:06:28.089 sys 0m1.382s 00:06:28.089 20:23:22 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.089 20:23:22 env -- common/autotest_common.sh@10 -- # set +x 00:06:28.089 
************************************ 00:06:28.089 END TEST env 00:06:28.089 ************************************ 00:06:28.089 20:23:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:28.089 20:23:22 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:28.089 20:23:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.089 20:23:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.089 20:23:22 -- common/autotest_common.sh@10 -- # set +x 00:06:28.089 ************************************ 00:06:28.089 START TEST rpc 00:06:28.089 ************************************ 00:06:28.089 20:23:22 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:28.348 * Looking for test storage... 00:06:28.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:28.348 20:23:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=75464 00:06:28.348 20:23:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.348 20:23:22 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:28.348 20:23:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 75464 00:06:28.348 20:23:22 rpc -- common/autotest_common.sh@829 -- # '[' -z 75464 ']' 00:06:28.348 20:23:22 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.348 20:23:22 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.348 20:23:22 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.348 20:23:22 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.348 20:23:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.348 [2024-07-12 20:23:22.406224] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:28.348 [2024-07-12 20:23:22.406443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75464 ] 00:06:28.606 [2024-07-12 20:23:22.563632] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:28.606 [2024-07-12 20:23:22.585902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.606 [2024-07-12 20:23:22.670695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:28.606 [2024-07-12 20:23:22.670789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 75464' to capture a snapshot of events at runtime. 00:06:28.606 [2024-07-12 20:23:22.670815] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.606 [2024-07-12 20:23:22.670831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:28.606 [2024-07-12 20:23:22.670854] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid75464 for offline analysis/debug. 
00:06:28.606 [2024-07-12 20:23:22.670892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.240 20:23:23 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.240 20:23:23 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:29.240 20:23:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:29.240 20:23:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:29.240 20:23:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:29.240 20:23:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:29.240 20:23:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.240 20:23:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.240 20:23:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.240 ************************************ 00:06:29.240 START TEST rpc_integrity 00:06:29.240 ************************************ 00:06:29.240 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:29.240 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:29.240 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.240 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.240 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.240 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:29.240 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:29.498 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:29.498 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:29.498 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.498 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.498 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.498 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:29.498 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:29.498 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.498 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.498 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.498 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:29.498 { 00:06:29.498 "name": "Malloc0", 00:06:29.498 "aliases": [ 00:06:29.498 "57414d61-8dd0-4d8f-bc5b-1a5376f49d2a" 00:06:29.498 ], 00:06:29.498 "product_name": "Malloc disk", 00:06:29.498 "block_size": 512, 00:06:29.498 "num_blocks": 16384, 00:06:29.498 "uuid": "57414d61-8dd0-4d8f-bc5b-1a5376f49d2a", 00:06:29.498 "assigned_rate_limits": { 00:06:29.498 "rw_ios_per_sec": 0, 00:06:29.498 "rw_mbytes_per_sec": 0, 00:06:29.498 "r_mbytes_per_sec": 0, 00:06:29.498 "w_mbytes_per_sec": 0 00:06:29.498 }, 00:06:29.498 "claimed": false, 00:06:29.498 "zoned": false, 00:06:29.498 "supported_io_types": { 00:06:29.498 "read": true, 00:06:29.498 "write": true, 00:06:29.498 "unmap": true, 00:06:29.498 "flush": true, 
00:06:29.498 "reset": true, 00:06:29.498 "nvme_admin": false, 00:06:29.498 "nvme_io": false, 00:06:29.498 "nvme_io_md": false, 00:06:29.498 "write_zeroes": true, 00:06:29.498 "zcopy": true, 00:06:29.498 "get_zone_info": false, 00:06:29.498 "zone_management": false, 00:06:29.498 "zone_append": false, 00:06:29.498 "compare": false, 00:06:29.498 "compare_and_write": false, 00:06:29.498 "abort": true, 00:06:29.498 "seek_hole": false, 00:06:29.498 "seek_data": false, 00:06:29.498 "copy": true, 00:06:29.498 "nvme_iov_md": false 00:06:29.498 }, 00:06:29.498 "memory_domains": [ 00:06:29.498 { 00:06:29.498 "dma_device_id": "system", 00:06:29.498 "dma_device_type": 1 00:06:29.498 }, 00:06:29.498 { 00:06:29.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.498 "dma_device_type": 2 00:06:29.498 } 00:06:29.498 ], 00:06:29.498 "driver_specific": {} 00:06:29.498 } 00:06:29.498 ]' 00:06:29.498 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:29.498 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:29.498 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:29.498 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.498 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.498 [2024-07-12 20:23:23.503888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:29.498 [2024-07-12 20:23:23.503973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:29.498 [2024-07-12 20:23:23.504023] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:29.498 [2024-07-12 20:23:23.504053] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:29.498 [2024-07-12 20:23:23.507113] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:29.498 [2024-07-12 20:23:23.507164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:29.498 Passthru0 00:06:29.498 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.498 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:29.498 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.498 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.498 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.498 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:29.498 { 00:06:29.498 "name": "Malloc0", 00:06:29.498 "aliases": [ 00:06:29.498 "57414d61-8dd0-4d8f-bc5b-1a5376f49d2a" 00:06:29.498 ], 00:06:29.498 "product_name": "Malloc disk", 00:06:29.498 "block_size": 512, 00:06:29.498 "num_blocks": 16384, 00:06:29.498 "uuid": "57414d61-8dd0-4d8f-bc5b-1a5376f49d2a", 00:06:29.498 "assigned_rate_limits": { 00:06:29.498 "rw_ios_per_sec": 0, 00:06:29.498 "rw_mbytes_per_sec": 0, 00:06:29.498 "r_mbytes_per_sec": 0, 00:06:29.498 "w_mbytes_per_sec": 0 00:06:29.498 }, 00:06:29.498 "claimed": true, 00:06:29.498 "claim_type": "exclusive_write", 00:06:29.498 "zoned": false, 00:06:29.499 "supported_io_types": { 00:06:29.499 "read": true, 00:06:29.499 "write": true, 00:06:29.499 "unmap": true, 00:06:29.499 "flush": true, 00:06:29.499 "reset": true, 00:06:29.499 "nvme_admin": false, 00:06:29.499 "nvme_io": false, 00:06:29.499 "nvme_io_md": false, 00:06:29.499 "write_zeroes": true, 00:06:29.499 "zcopy": true, 
00:06:29.499 "get_zone_info": false, 00:06:29.499 "zone_management": false, 00:06:29.499 "zone_append": false, 00:06:29.499 "compare": false, 00:06:29.499 "compare_and_write": false, 00:06:29.499 "abort": true, 00:06:29.499 "seek_hole": false, 00:06:29.499 "seek_data": false, 00:06:29.499 "copy": true, 00:06:29.499 "nvme_iov_md": false 00:06:29.499 }, 00:06:29.499 "memory_domains": [ 00:06:29.499 { 00:06:29.499 "dma_device_id": "system", 00:06:29.499 "dma_device_type": 1 00:06:29.499 }, 00:06:29.499 { 00:06:29.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.499 "dma_device_type": 2 00:06:29.499 } 00:06:29.499 ], 00:06:29.499 "driver_specific": {} 00:06:29.499 }, 00:06:29.499 { 00:06:29.499 "name": "Passthru0", 00:06:29.499 "aliases": [ 00:06:29.499 "488a3804-f21c-54ba-9798-c44a3e9b7310" 00:06:29.499 ], 00:06:29.499 "product_name": "passthru", 00:06:29.499 "block_size": 512, 00:06:29.499 "num_blocks": 16384, 00:06:29.499 "uuid": "488a3804-f21c-54ba-9798-c44a3e9b7310", 00:06:29.499 "assigned_rate_limits": { 00:06:29.499 "rw_ios_per_sec": 0, 00:06:29.499 "rw_mbytes_per_sec": 0, 00:06:29.499 "r_mbytes_per_sec": 0, 00:06:29.499 "w_mbytes_per_sec": 0 00:06:29.499 }, 00:06:29.499 "claimed": false, 00:06:29.499 "zoned": false, 00:06:29.499 "supported_io_types": { 00:06:29.499 "read": true, 00:06:29.499 "write": true, 00:06:29.499 "unmap": true, 00:06:29.499 "flush": true, 00:06:29.499 "reset": true, 00:06:29.499 "nvme_admin": false, 00:06:29.499 "nvme_io": false, 00:06:29.499 "nvme_io_md": false, 00:06:29.499 "write_zeroes": true, 00:06:29.499 "zcopy": true, 00:06:29.499 "get_zone_info": false, 00:06:29.499 "zone_management": false, 00:06:29.499 "zone_append": false, 00:06:29.499 "compare": false, 00:06:29.499 "compare_and_write": false, 00:06:29.499 "abort": true, 00:06:29.499 "seek_hole": false, 00:06:29.499 "seek_data": false, 00:06:29.499 "copy": true, 00:06:29.499 "nvme_iov_md": false 00:06:29.499 }, 00:06:29.499 "memory_domains": [ 00:06:29.499 { 00:06:29.499 "dma_device_id": "system", 00:06:29.499 "dma_device_type": 1 00:06:29.499 }, 00:06:29.499 { 00:06:29.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.499 "dma_device_type": 2 00:06:29.499 } 00:06:29.499 ], 00:06:29.499 "driver_specific": { 00:06:29.499 "passthru": { 00:06:29.499 "name": "Passthru0", 00:06:29.499 "base_bdev_name": "Malloc0" 00:06:29.499 } 00:06:29.499 } 00:06:29.499 } 00:06:29.499 ]' 00:06:29.499 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:29.499 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:29.499 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:29.499 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.499 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.499 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.499 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:29.499 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.499 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.499 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.499 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:29.499 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.499 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
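The rpc_integrity trace above drives everything through the harness's rpc_cmd wrapper. A minimal hand-run sketch of the same create/inspect/delete sequence, assuming a target is already listening on the default /var/tmp/spdk.sock and using the standalone scripts/rpc.py client from the checkout path seen in this log (the jq checks mirror the ones in rpc.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    malloc=$("$rpc" bdev_malloc_create 8 512)                  # 8 MB malloc bdev, 512-byte blocks; prints its name (Malloc0 above)
    "$rpc" bdev_passthru_create -b "$malloc" -p Passthru0      # layer a passthru vbdev on top of it
    "$rpc" bdev_get_bdevs | jq length                          # expect 2: the malloc bdev plus Passthru0
    "$rpc" bdev_get_bdevs -b "$malloc" | jq '.[0].claimed'     # expect true: the base bdev is claimed (exclusive_write)
    "$rpc" bdev_passthru_delete Passthru0
    "$rpc" bdev_malloc_delete "$malloc"
    "$rpc" bdev_get_bdevs | jq length                          # expect 0 once both are deleted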
00:06:29.499 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.499 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:29.499 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:29.758 ************************************ 00:06:29.758 END TEST rpc_integrity 00:06:29.758 ************************************ 00:06:29.758 20:23:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:29.758 00:06:29.758 real 0m0.311s 00:06:29.758 user 0m0.200s 00:06:29.758 sys 0m0.041s 00:06:29.758 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.758 20:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.758 20:23:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:29.758 20:23:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:29.758 20:23:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.758 20:23:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.758 20:23:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.758 ************************************ 00:06:29.758 START TEST rpc_plugins 00:06:29.758 ************************************ 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:29.758 20:23:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.758 20:23:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:29.758 20:23:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.758 20:23:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:29.758 { 00:06:29.758 "name": "Malloc1", 00:06:29.758 "aliases": [ 00:06:29.758 "10147558-6bab-4593-a376-44582ab65a52" 00:06:29.758 ], 00:06:29.758 "product_name": "Malloc disk", 00:06:29.758 "block_size": 4096, 00:06:29.758 "num_blocks": 256, 00:06:29.758 "uuid": "10147558-6bab-4593-a376-44582ab65a52", 00:06:29.758 "assigned_rate_limits": { 00:06:29.758 "rw_ios_per_sec": 0, 00:06:29.758 "rw_mbytes_per_sec": 0, 00:06:29.758 "r_mbytes_per_sec": 0, 00:06:29.758 "w_mbytes_per_sec": 0 00:06:29.758 }, 00:06:29.758 "claimed": false, 00:06:29.758 "zoned": false, 00:06:29.758 "supported_io_types": { 00:06:29.758 "read": true, 00:06:29.758 "write": true, 00:06:29.758 "unmap": true, 00:06:29.758 "flush": true, 00:06:29.758 "reset": true, 00:06:29.758 "nvme_admin": false, 00:06:29.758 "nvme_io": false, 00:06:29.758 "nvme_io_md": false, 00:06:29.758 "write_zeroes": true, 00:06:29.758 "zcopy": true, 00:06:29.758 "get_zone_info": false, 00:06:29.758 "zone_management": false, 00:06:29.758 "zone_append": false, 00:06:29.758 "compare": false, 00:06:29.758 "compare_and_write": false, 00:06:29.758 "abort": true, 00:06:29.758 "seek_hole": false, 00:06:29.758 "seek_data": false, 00:06:29.758 "copy": true, 00:06:29.758 "nvme_iov_md": false 00:06:29.758 }, 00:06:29.758 "memory_domains": [ 00:06:29.758 { 00:06:29.758 "dma_device_id": "system", 00:06:29.758 
"dma_device_type": 1 00:06:29.758 }, 00:06:29.758 { 00:06:29.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.758 "dma_device_type": 2 00:06:29.758 } 00:06:29.758 ], 00:06:29.758 "driver_specific": {} 00:06:29.758 } 00:06:29.758 ]' 00:06:29.758 20:23:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:29.758 20:23:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:29.758 20:23:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.758 20:23:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.758 20:23:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:29.758 20:23:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:29.758 ************************************ 00:06:29.758 END TEST rpc_plugins 00:06:29.758 ************************************ 00:06:29.758 20:23:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:29.758 00:06:29.758 real 0m0.152s 00:06:29.758 user 0m0.099s 00:06:29.758 sys 0m0.015s 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.758 20:23:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:30.016 20:23:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:30.017 20:23:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:30.017 20:23:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.017 20:23:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.017 20:23:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.017 ************************************ 00:06:30.017 START TEST rpc_trace_cmd_test 00:06:30.017 ************************************ 00:06:30.017 20:23:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:30.017 20:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:30.017 20:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:30.017 20:23:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.017 20:23:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.017 20:23:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.017 20:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:30.017 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid75464", 00:06:30.017 "tpoint_group_mask": "0x8", 00:06:30.017 "iscsi_conn": { 00:06:30.017 "mask": "0x2", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 }, 00:06:30.017 "scsi": { 00:06:30.017 "mask": "0x4", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 }, 00:06:30.017 "bdev": { 00:06:30.017 "mask": "0x8", 00:06:30.017 "tpoint_mask": "0xffffffffffffffff" 00:06:30.017 }, 00:06:30.017 "nvmf_rdma": { 00:06:30.017 "mask": "0x10", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 }, 00:06:30.017 "nvmf_tcp": { 00:06:30.017 "mask": "0x20", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 }, 00:06:30.017 "ftl": 
{ 00:06:30.017 "mask": "0x40", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 }, 00:06:30.017 "blobfs": { 00:06:30.017 "mask": "0x80", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 }, 00:06:30.017 "dsa": { 00:06:30.017 "mask": "0x200", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 }, 00:06:30.017 "thread": { 00:06:30.017 "mask": "0x400", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 }, 00:06:30.017 "nvme_pcie": { 00:06:30.017 "mask": "0x800", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 }, 00:06:30.017 "iaa": { 00:06:30.017 "mask": "0x1000", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 }, 00:06:30.017 "nvme_tcp": { 00:06:30.017 "mask": "0x2000", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 }, 00:06:30.017 "bdev_nvme": { 00:06:30.017 "mask": "0x4000", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 }, 00:06:30.017 "sock": { 00:06:30.017 "mask": "0x8000", 00:06:30.017 "tpoint_mask": "0x0" 00:06:30.017 } 00:06:30.017 }' 00:06:30.017 20:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:30.017 20:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:30.017 20:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:30.017 20:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:30.017 20:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:30.017 20:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:30.017 20:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:30.017 20:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:30.017 20:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:30.275 ************************************ 00:06:30.275 END TEST rpc_trace_cmd_test 00:06:30.275 ************************************ 00:06:30.275 20:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:30.275 00:06:30.275 real 0m0.264s 00:06:30.275 user 0m0.236s 00:06:30.275 sys 0m0.017s 00:06:30.275 20:23:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.275 20:23:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.275 20:23:24 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:30.275 20:23:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:30.275 20:23:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:30.275 20:23:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:30.275 20:23:24 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.275 20:23:24 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.275 20:23:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.275 ************************************ 00:06:30.275 START TEST rpc_daemon_integrity 00:06:30.275 ************************************ 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq 
length 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.275 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:30.275 { 00:06:30.275 "name": "Malloc2", 00:06:30.275 "aliases": [ 00:06:30.275 "79cb1656-008b-4cbe-af42-becdec02698b" 00:06:30.275 ], 00:06:30.275 "product_name": "Malloc disk", 00:06:30.275 "block_size": 512, 00:06:30.275 "num_blocks": 16384, 00:06:30.276 "uuid": "79cb1656-008b-4cbe-af42-becdec02698b", 00:06:30.276 "assigned_rate_limits": { 00:06:30.276 "rw_ios_per_sec": 0, 00:06:30.276 "rw_mbytes_per_sec": 0, 00:06:30.276 "r_mbytes_per_sec": 0, 00:06:30.276 "w_mbytes_per_sec": 0 00:06:30.276 }, 00:06:30.276 "claimed": false, 00:06:30.276 "zoned": false, 00:06:30.276 "supported_io_types": { 00:06:30.276 "read": true, 00:06:30.276 "write": true, 00:06:30.276 "unmap": true, 00:06:30.276 "flush": true, 00:06:30.276 "reset": true, 00:06:30.276 "nvme_admin": false, 00:06:30.276 "nvme_io": false, 00:06:30.276 "nvme_io_md": false, 00:06:30.276 "write_zeroes": true, 00:06:30.276 "zcopy": true, 00:06:30.276 "get_zone_info": false, 00:06:30.276 "zone_management": false, 00:06:30.276 "zone_append": false, 00:06:30.276 "compare": false, 00:06:30.276 "compare_and_write": false, 00:06:30.276 "abort": true, 00:06:30.276 "seek_hole": false, 00:06:30.276 "seek_data": false, 00:06:30.276 "copy": true, 00:06:30.276 "nvme_iov_md": false 00:06:30.276 }, 00:06:30.276 "memory_domains": [ 00:06:30.276 { 00:06:30.276 "dma_device_id": "system", 00:06:30.276 "dma_device_type": 1 00:06:30.276 }, 00:06:30.276 { 00:06:30.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.276 "dma_device_type": 2 00:06:30.276 } 00:06:30.276 ], 00:06:30.276 "driver_specific": {} 00:06:30.276 } 00:06:30.276 ]' 00:06:30.276 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:30.276 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:30.276 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:30.276 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.276 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.276 [2024-07-12 20:23:24.390413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:30.276 [2024-07-12 20:23:24.390474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:30.276 [2024-07-12 20:23:24.390506] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:30.276 [2024-07-12 20:23:24.390523] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:30.276 [2024-07-12 20:23:24.393398] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:30.276 [2024-07-12 20:23:24.393441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:30.276 Passthru0 00:06:30.276 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.276 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:30.276 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.276 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:30.535 { 00:06:30.535 "name": "Malloc2", 00:06:30.535 "aliases": [ 00:06:30.535 "79cb1656-008b-4cbe-af42-becdec02698b" 00:06:30.535 ], 00:06:30.535 "product_name": "Malloc disk", 00:06:30.535 "block_size": 512, 00:06:30.535 "num_blocks": 16384, 00:06:30.535 "uuid": "79cb1656-008b-4cbe-af42-becdec02698b", 00:06:30.535 "assigned_rate_limits": { 00:06:30.535 "rw_ios_per_sec": 0, 00:06:30.535 "rw_mbytes_per_sec": 0, 00:06:30.535 "r_mbytes_per_sec": 0, 00:06:30.535 "w_mbytes_per_sec": 0 00:06:30.535 }, 00:06:30.535 "claimed": true, 00:06:30.535 "claim_type": "exclusive_write", 00:06:30.535 "zoned": false, 00:06:30.535 "supported_io_types": { 00:06:30.535 "read": true, 00:06:30.535 "write": true, 00:06:30.535 "unmap": true, 00:06:30.535 "flush": true, 00:06:30.535 "reset": true, 00:06:30.535 "nvme_admin": false, 00:06:30.535 "nvme_io": false, 00:06:30.535 "nvme_io_md": false, 00:06:30.535 "write_zeroes": true, 00:06:30.535 "zcopy": true, 00:06:30.535 "get_zone_info": false, 00:06:30.535 "zone_management": false, 00:06:30.535 "zone_append": false, 00:06:30.535 "compare": false, 00:06:30.535 "compare_and_write": false, 00:06:30.535 "abort": true, 00:06:30.535 "seek_hole": false, 00:06:30.535 "seek_data": false, 00:06:30.535 "copy": true, 00:06:30.535 "nvme_iov_md": false 00:06:30.535 }, 00:06:30.535 "memory_domains": [ 00:06:30.535 { 00:06:30.535 "dma_device_id": "system", 00:06:30.535 "dma_device_type": 1 00:06:30.535 }, 00:06:30.535 { 00:06:30.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.535 "dma_device_type": 2 00:06:30.535 } 00:06:30.535 ], 00:06:30.535 "driver_specific": {} 00:06:30.535 }, 00:06:30.535 { 00:06:30.535 "name": "Passthru0", 00:06:30.535 "aliases": [ 00:06:30.535 "ae94197c-ba0c-5e7f-8bbb-c38e270e2486" 00:06:30.535 ], 00:06:30.535 "product_name": "passthru", 00:06:30.535 "block_size": 512, 00:06:30.535 "num_blocks": 16384, 00:06:30.535 "uuid": "ae94197c-ba0c-5e7f-8bbb-c38e270e2486", 00:06:30.535 "assigned_rate_limits": { 00:06:30.535 "rw_ios_per_sec": 0, 00:06:30.535 "rw_mbytes_per_sec": 0, 00:06:30.535 "r_mbytes_per_sec": 0, 00:06:30.535 "w_mbytes_per_sec": 0 00:06:30.535 }, 00:06:30.535 "claimed": false, 00:06:30.535 "zoned": false, 00:06:30.535 "supported_io_types": { 00:06:30.535 "read": true, 00:06:30.535 "write": true, 00:06:30.535 "unmap": true, 00:06:30.535 "flush": true, 00:06:30.535 "reset": true, 00:06:30.535 "nvme_admin": false, 00:06:30.535 "nvme_io": false, 00:06:30.535 "nvme_io_md": false, 00:06:30.535 "write_zeroes": true, 00:06:30.535 "zcopy": true, 00:06:30.535 "get_zone_info": false, 00:06:30.535 "zone_management": false, 00:06:30.535 "zone_append": false, 00:06:30.535 "compare": 
false, 00:06:30.535 "compare_and_write": false, 00:06:30.535 "abort": true, 00:06:30.535 "seek_hole": false, 00:06:30.535 "seek_data": false, 00:06:30.535 "copy": true, 00:06:30.535 "nvme_iov_md": false 00:06:30.535 }, 00:06:30.535 "memory_domains": [ 00:06:30.535 { 00:06:30.535 "dma_device_id": "system", 00:06:30.535 "dma_device_type": 1 00:06:30.535 }, 00:06:30.535 { 00:06:30.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.535 "dma_device_type": 2 00:06:30.535 } 00:06:30.535 ], 00:06:30.535 "driver_specific": { 00:06:30.535 "passthru": { 00:06:30.535 "name": "Passthru0", 00:06:30.535 "base_bdev_name": "Malloc2" 00:06:30.535 } 00:06:30.535 } 00:06:30.535 } 00:06:30.535 ]' 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:30.535 ************************************ 00:06:30.535 END TEST rpc_daemon_integrity 00:06:30.535 ************************************ 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:30.535 00:06:30.535 real 0m0.315s 00:06:30.535 user 0m0.211s 00:06:30.535 sys 0m0.036s 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.535 20:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.535 20:23:24 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:30.535 20:23:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:30.535 20:23:24 rpc -- rpc/rpc.sh@84 -- # killprocess 75464 00:06:30.535 20:23:24 rpc -- common/autotest_common.sh@948 -- # '[' -z 75464 ']' 00:06:30.535 20:23:24 rpc -- common/autotest_common.sh@952 -- # kill -0 75464 00:06:30.535 20:23:24 rpc -- common/autotest_common.sh@953 -- # uname 00:06:30.535 20:23:24 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.535 20:23:24 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75464 00:06:30.535 killing process with pid 75464 00:06:30.535 20:23:24 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.535 20:23:24 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.535 
20:23:24 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75464' 00:06:30.535 20:23:24 rpc -- common/autotest_common.sh@967 -- # kill 75464 00:06:30.535 20:23:24 rpc -- common/autotest_common.sh@972 -- # wait 75464 00:06:31.102 00:06:31.102 real 0m2.890s 00:06:31.102 user 0m3.571s 00:06:31.102 sys 0m0.820s 00:06:31.102 20:23:25 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.102 20:23:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.102 ************************************ 00:06:31.102 END TEST rpc 00:06:31.102 ************************************ 00:06:31.102 20:23:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.102 20:23:25 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:31.102 20:23:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.102 20:23:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.102 20:23:25 -- common/autotest_common.sh@10 -- # set +x 00:06:31.102 ************************************ 00:06:31.102 START TEST skip_rpc 00:06:31.102 ************************************ 00:06:31.102 20:23:25 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:31.102 * Looking for test storage... 00:06:31.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:31.102 20:23:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:31.102 20:23:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:31.102 20:23:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:31.102 20:23:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.102 20:23:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.102 20:23:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.102 ************************************ 00:06:31.102 START TEST skip_rpc 00:06:31.102 ************************************ 00:06:31.102 20:23:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:31.102 20:23:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=75660 00:06:31.102 20:23:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:31.102 20:23:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.102 20:23:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:31.362 [2024-07-12 20:23:25.323782] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:31.362 [2024-07-12 20:23:25.323952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75660 ] 00:06:31.362 [2024-07-12 20:23:25.469440] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
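skip_rpc starts this target with --no-rpc-server, so the spdk_get_version attempt a few lines below has to fail; that failure is the whole point of the test. A rough stand-alone sketch of the same check, assuming the checkout path used in this log and the standalone scripts/rpc.py client (not part of the trace itself):

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/build/bin/spdk_tgt --no-rpc-server -m 0x1 &        # no RPC listener is ever created
    tgt=$!
    sleep 5                                                    # same settle time the test uses
    if "$spdk"/scripts/rpc.py spdk_get_version; then
        echo "unexpected: something answered on /var/tmp/spdk.sock" >&2
    else
        echo "expected: spdk_get_version fails because no RPC server is listening"
    fi
    kill "$tgt"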
00:06:31.362 [2024-07-12 20:23:25.487765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.620 [2024-07-12 20:23:25.566485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 75660 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 75660 ']' 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 75660 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75660 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.890 killing process with pid 75660 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75660' 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 75660 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 75660 00:06:36.890 00:06:36.890 real 0m5.478s 00:06:36.890 user 0m5.028s 00:06:36.890 sys 0m0.352s 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.890 20:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.890 ************************************ 00:06:36.890 END TEST skip_rpc 00:06:36.890 ************************************ 00:06:36.890 20:23:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:36.890 20:23:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:36.890 20:23:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.890 20:23:30 skip_rpc 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.890 20:23:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.890 ************************************ 00:06:36.890 START TEST skip_rpc_with_json 00:06:36.890 ************************************ 00:06:36.890 20:23:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:36.890 20:23:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:36.890 20:23:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=75748 00:06:36.890 20:23:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.890 20:23:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.890 20:23:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 75748 00:06:36.890 20:23:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 75748 ']' 00:06:36.890 20:23:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.890 20:23:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.890 20:23:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.890 20:23:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.890 20:23:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.890 [2024-07-12 20:23:30.875978] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:36.890 [2024-07-12 20:23:30.876189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75748 ] 00:06:36.890 [2024-07-12 20:23:31.029644] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
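The skip_rpc_with_json run that follows amounts to three steps: create a TCP transport over RPC, persist the live configuration with save_config, then feed that JSON back to a fresh target started with --json and grep its log for the transport-init notice. A condensed sketch under those assumptions (scripts/rpc.py and the /tmp/config.json name are illustrative; the test itself writes test/rpc/config.json as seen below):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_get_transports --trtype tcp || true            # fails first: no transport exists yet
    "$rpc" nvmf_create_transport -t tcp                        # triggers "*** TCP Transport Init ***"
    "$rpc" save_config > /tmp/config.json                      # the JSON dumped below
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json
    # grepping the new target's log for 'TCP Transport Init' confirms the transport came back from JSON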
00:06:37.149 [2024-07-12 20:23:31.049984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.149 [2024-07-12 20:23:31.135791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:37.715 [2024-07-12 20:23:31.811910] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:37.715 request: 00:06:37.715 { 00:06:37.715 "trtype": "tcp", 00:06:37.715 "method": "nvmf_get_transports", 00:06:37.715 "req_id": 1 00:06:37.715 } 00:06:37.715 Got JSON-RPC error response 00:06:37.715 response: 00:06:37.715 { 00:06:37.715 "code": -19, 00:06:37.715 "message": "No such device" 00:06:37.715 } 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:37.715 [2024-07-12 20:23:31.824069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.715 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:37.974 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.974 20:23:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:37.974 { 00:06:37.974 "subsystems": [ 00:06:37.974 { 00:06:37.974 "subsystem": "keyring", 00:06:37.974 "config": [] 00:06:37.974 }, 00:06:37.974 { 00:06:37.974 "subsystem": "iobuf", 00:06:37.974 "config": [ 00:06:37.974 { 00:06:37.974 "method": "iobuf_set_options", 00:06:37.974 "params": { 00:06:37.974 "small_pool_count": 8192, 00:06:37.974 "large_pool_count": 1024, 00:06:37.974 "small_bufsize": 8192, 00:06:37.974 "large_bufsize": 135168 00:06:37.974 } 00:06:37.974 } 00:06:37.975 ] 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "subsystem": "sock", 00:06:37.975 "config": [ 00:06:37.975 { 00:06:37.975 "method": "sock_set_default_impl", 00:06:37.975 "params": { 00:06:37.975 "impl_name": "posix" 00:06:37.975 } 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "method": "sock_impl_set_options", 00:06:37.975 "params": { 00:06:37.975 "impl_name": "ssl", 00:06:37.975 "recv_buf_size": 4096, 00:06:37.975 "send_buf_size": 4096, 00:06:37.975 "enable_recv_pipe": true, 00:06:37.975 "enable_quickack": false, 00:06:37.975 "enable_placement_id": 0, 00:06:37.975 "enable_zerocopy_send_server": true, 00:06:37.975 "enable_zerocopy_send_client": false, 00:06:37.975 "zerocopy_threshold": 0, 00:06:37.975 "tls_version": 0, 00:06:37.975 "enable_ktls": 
false 00:06:37.975 } 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "method": "sock_impl_set_options", 00:06:37.975 "params": { 00:06:37.975 "impl_name": "posix", 00:06:37.975 "recv_buf_size": 2097152, 00:06:37.975 "send_buf_size": 2097152, 00:06:37.975 "enable_recv_pipe": true, 00:06:37.975 "enable_quickack": false, 00:06:37.975 "enable_placement_id": 0, 00:06:37.975 "enable_zerocopy_send_server": true, 00:06:37.975 "enable_zerocopy_send_client": false, 00:06:37.975 "zerocopy_threshold": 0, 00:06:37.975 "tls_version": 0, 00:06:37.975 "enable_ktls": false 00:06:37.975 } 00:06:37.975 } 00:06:37.975 ] 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "subsystem": "vmd", 00:06:37.975 "config": [] 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "subsystem": "accel", 00:06:37.975 "config": [ 00:06:37.975 { 00:06:37.975 "method": "accel_set_options", 00:06:37.975 "params": { 00:06:37.975 "small_cache_size": 128, 00:06:37.975 "large_cache_size": 16, 00:06:37.975 "task_count": 2048, 00:06:37.975 "sequence_count": 2048, 00:06:37.975 "buf_count": 2048 00:06:37.975 } 00:06:37.975 } 00:06:37.975 ] 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "subsystem": "bdev", 00:06:37.975 "config": [ 00:06:37.975 { 00:06:37.975 "method": "bdev_set_options", 00:06:37.975 "params": { 00:06:37.975 "bdev_io_pool_size": 65535, 00:06:37.975 "bdev_io_cache_size": 256, 00:06:37.975 "bdev_auto_examine": true, 00:06:37.975 "iobuf_small_cache_size": 128, 00:06:37.975 "iobuf_large_cache_size": 16 00:06:37.975 } 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "method": "bdev_raid_set_options", 00:06:37.975 "params": { 00:06:37.975 "process_window_size_kb": 1024 00:06:37.975 } 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "method": "bdev_iscsi_set_options", 00:06:37.975 "params": { 00:06:37.975 "timeout_sec": 30 00:06:37.975 } 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "method": "bdev_nvme_set_options", 00:06:37.975 "params": { 00:06:37.975 "action_on_timeout": "none", 00:06:37.975 "timeout_us": 0, 00:06:37.975 "timeout_admin_us": 0, 00:06:37.975 "keep_alive_timeout_ms": 10000, 00:06:37.975 "arbitration_burst": 0, 00:06:37.975 "low_priority_weight": 0, 00:06:37.975 "medium_priority_weight": 0, 00:06:37.975 "high_priority_weight": 0, 00:06:37.975 "nvme_adminq_poll_period_us": 10000, 00:06:37.975 "nvme_ioq_poll_period_us": 0, 00:06:37.975 "io_queue_requests": 0, 00:06:37.975 "delay_cmd_submit": true, 00:06:37.975 "transport_retry_count": 4, 00:06:37.975 "bdev_retry_count": 3, 00:06:37.975 "transport_ack_timeout": 0, 00:06:37.975 "ctrlr_loss_timeout_sec": 0, 00:06:37.975 "reconnect_delay_sec": 0, 00:06:37.975 "fast_io_fail_timeout_sec": 0, 00:06:37.975 "disable_auto_failback": false, 00:06:37.975 "generate_uuids": false, 00:06:37.975 "transport_tos": 0, 00:06:37.975 "nvme_error_stat": false, 00:06:37.975 "rdma_srq_size": 0, 00:06:37.975 "io_path_stat": false, 00:06:37.975 "allow_accel_sequence": false, 00:06:37.975 "rdma_max_cq_size": 0, 00:06:37.975 "rdma_cm_event_timeout_ms": 0, 00:06:37.975 "dhchap_digests": [ 00:06:37.975 "sha256", 00:06:37.975 "sha384", 00:06:37.975 "sha512" 00:06:37.975 ], 00:06:37.975 "dhchap_dhgroups": [ 00:06:37.975 "null", 00:06:37.975 "ffdhe2048", 00:06:37.975 "ffdhe3072", 00:06:37.975 "ffdhe4096", 00:06:37.975 "ffdhe6144", 00:06:37.975 "ffdhe8192" 00:06:37.975 ] 00:06:37.975 } 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "method": "bdev_nvme_set_hotplug", 00:06:37.975 "params": { 00:06:37.975 "period_us": 100000, 00:06:37.975 "enable": false 00:06:37.975 } 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "method": 
"bdev_wait_for_examine" 00:06:37.975 } 00:06:37.975 ] 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "subsystem": "scsi", 00:06:37.975 "config": null 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "subsystem": "scheduler", 00:06:37.975 "config": [ 00:06:37.975 { 00:06:37.975 "method": "framework_set_scheduler", 00:06:37.975 "params": { 00:06:37.975 "name": "static" 00:06:37.975 } 00:06:37.975 } 00:06:37.975 ] 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "subsystem": "vhost_scsi", 00:06:37.975 "config": [] 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "subsystem": "vhost_blk", 00:06:37.975 "config": [] 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "subsystem": "ublk", 00:06:37.975 "config": [] 00:06:37.975 }, 00:06:37.975 { 00:06:37.975 "subsystem": "nbd", 00:06:37.976 "config": [] 00:06:37.976 }, 00:06:37.976 { 00:06:37.976 "subsystem": "nvmf", 00:06:37.976 "config": [ 00:06:37.976 { 00:06:37.976 "method": "nvmf_set_config", 00:06:37.976 "params": { 00:06:37.976 "discovery_filter": "match_any", 00:06:37.976 "admin_cmd_passthru": { 00:06:37.976 "identify_ctrlr": false 00:06:37.976 } 00:06:37.976 } 00:06:37.976 }, 00:06:37.976 { 00:06:37.976 "method": "nvmf_set_max_subsystems", 00:06:37.976 "params": { 00:06:37.976 "max_subsystems": 1024 00:06:37.976 } 00:06:37.976 }, 00:06:37.976 { 00:06:37.976 "method": "nvmf_set_crdt", 00:06:37.976 "params": { 00:06:37.976 "crdt1": 0, 00:06:37.976 "crdt2": 0, 00:06:37.976 "crdt3": 0 00:06:37.976 } 00:06:37.976 }, 00:06:37.976 { 00:06:37.976 "method": "nvmf_create_transport", 00:06:37.976 "params": { 00:06:37.976 "trtype": "TCP", 00:06:37.976 "max_queue_depth": 128, 00:06:37.976 "max_io_qpairs_per_ctrlr": 127, 00:06:37.976 "in_capsule_data_size": 4096, 00:06:37.976 "max_io_size": 131072, 00:06:37.976 "io_unit_size": 131072, 00:06:37.976 "max_aq_depth": 128, 00:06:37.976 "num_shared_buffers": 511, 00:06:37.976 "buf_cache_size": 4294967295, 00:06:37.976 "dif_insert_or_strip": false, 00:06:37.976 "zcopy": false, 00:06:37.976 "c2h_success": true, 00:06:37.976 "sock_priority": 0, 00:06:37.976 "abort_timeout_sec": 1, 00:06:37.976 "ack_timeout": 0, 00:06:37.976 "data_wr_pool_size": 0 00:06:37.976 } 00:06:37.976 } 00:06:37.976 ] 00:06:37.976 }, 00:06:37.976 { 00:06:37.976 "subsystem": "iscsi", 00:06:37.976 "config": [ 00:06:37.976 { 00:06:37.976 "method": "iscsi_set_options", 00:06:37.976 "params": { 00:06:37.976 "node_base": "iqn.2016-06.io.spdk", 00:06:37.976 "max_sessions": 128, 00:06:37.976 "max_connections_per_session": 2, 00:06:37.976 "max_queue_depth": 64, 00:06:37.976 "default_time2wait": 2, 00:06:37.976 "default_time2retain": 20, 00:06:37.976 "first_burst_length": 8192, 00:06:37.976 "immediate_data": true, 00:06:37.976 "allow_duplicated_isid": false, 00:06:37.976 "error_recovery_level": 0, 00:06:37.976 "nop_timeout": 60, 00:06:37.976 "nop_in_interval": 30, 00:06:37.976 "disable_chap": false, 00:06:37.976 "require_chap": false, 00:06:37.976 "mutual_chap": false, 00:06:37.976 "chap_group": 0, 00:06:37.976 "max_large_datain_per_connection": 64, 00:06:37.976 "max_r2t_per_connection": 4, 00:06:37.976 "pdu_pool_size": 36864, 00:06:37.976 "immediate_data_pool_size": 16384, 00:06:37.976 "data_out_pool_size": 2048 00:06:37.976 } 00:06:37.976 } 00:06:37.976 ] 00:06:37.976 } 00:06:37.976 ] 00:06:37.976 } 00:06:37.976 20:23:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:37.976 20:23:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 75748 00:06:37.976 20:23:31 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@948 -- # '[' -z 75748 ']' 00:06:37.976 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 75748 00:06:37.976 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:37.976 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.976 20:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75748 00:06:37.976 20:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.976 killing process with pid 75748 00:06:37.976 20:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.976 20:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75748' 00:06:37.976 20:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 75748 00:06:37.976 20:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 75748 00:06:38.543 20:23:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=75777 00:06:38.543 20:23:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:38.543 20:23:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:43.817 20:23:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 75777 00:06:43.817 20:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 75777 ']' 00:06:43.817 20:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 75777 00:06:43.817 20:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:43.817 20:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.817 20:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75777 00:06:43.817 20:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.817 killing process with pid 75777 00:06:43.817 20:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.817 20:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75777' 00:06:43.817 20:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 75777 00:06:43.817 20:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 75777 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:44.075 00:06:44.075 real 0m7.381s 00:06:44.075 user 0m6.840s 00:06:44.075 sys 0m0.903s 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:44.075 ************************************ 00:06:44.075 END TEST skip_rpc_with_json 00:06:44.075 ************************************ 00:06:44.075 20:23:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:44.075 20:23:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay 
test_skip_rpc_with_delay 00:06:44.075 20:23:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.075 20:23:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.075 20:23:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.075 ************************************ 00:06:44.075 START TEST skip_rpc_with_delay 00:06:44.075 ************************************ 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:44.075 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:44.334 [2024-07-12 20:23:38.304991] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
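The spdk_app_start error above is the expected outcome: --wait-for-rpc pauses initialization until an RPC tells the app to continue, which is meaningless when --no-rpc-server suppresses the listener, so startup must abort. A sketch of the same negative check, assuming the checkout path used in this log:

    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: the contradictory flag combination was accepted" >&2
    else
        echo "expected: startup aborts with the app.c error shown above"
    fi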
00:06:44.334 [2024-07-12 20:23:38.305208] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:44.334 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:44.334 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.334 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:44.334 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.334 00:06:44.334 real 0m0.185s 00:06:44.334 user 0m0.101s 00:06:44.334 sys 0m0.082s 00:06:44.334 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.334 20:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:44.334 ************************************ 00:06:44.334 END TEST skip_rpc_with_delay 00:06:44.334 ************************************ 00:06:44.334 20:23:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:44.334 20:23:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:44.334 20:23:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:44.334 20:23:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:44.334 20:23:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.334 20:23:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.334 20:23:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.334 ************************************ 00:06:44.334 START TEST exit_on_failed_rpc_init 00:06:44.334 ************************************ 00:06:44.334 20:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:44.334 20:23:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=75896 00:06:44.334 20:23:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 75896 00:06:44.334 20:23:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.334 20:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 75896 ']' 00:06:44.334 20:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.334 20:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.334 20:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.334 20:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.334 20:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:44.593 [2024-07-12 20:23:38.521515] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:06:44.593 [2024-07-12 20:23:38.521668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75896 ] 00:06:44.593 [2024-07-12 20:23:38.666097] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:44.593 [2024-07-12 20:23:38.686647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.851 [2024-07-12 20:23:38.806674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:45.450 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:45.450 [2024-07-12 20:23:39.574636] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:45.450 [2024-07-12 20:23:39.574873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75914 ] 00:06:45.708 [2024-07-12 20:23:39.729837] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
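exit_on_failed_rpc_init starts this second target (core mask 0x2) while the first one still owns /var/tmp/spdk.sock, so the rpc_listen failure reported just below is deliberate. A sketch of the clash and of the non-conflicting variant, assuming the paths in this log; /var/tmp/spdk2.sock is an illustrative name, and -r / -s pick the RPC socket for target and client respectively:

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/build/bin/spdk_tgt -m 0x1 &                        # first target binds /var/tmp/spdk.sock
    sleep 5
    "$spdk"/build/bin/spdk_tgt -m 0x2 \
        || echo "expected: second target cannot listen on /var/tmp/spdk.sock"
    # to run two targets side by side, give the second its own socket:
    "$spdk"/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
    "$spdk"/scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version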
00:06:45.708 [2024-07-12 20:23:39.751883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.708 [2024-07-12 20:23:39.855863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.708 [2024-07-12 20:23:39.856001] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:45.708 [2024-07-12 20:23:39.856037] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:45.708 [2024-07-12 20:23:39.856063] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.966 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:45.966 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.966 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:45.966 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:45.966 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:45.966 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.966 20:23:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:45.966 20:23:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 75896 00:06:45.966 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 75896 ']' 00:06:45.966 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 75896 00:06:45.966 20:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:45.966 20:23:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.966 20:23:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75896 00:06:45.966 20:23:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.966 20:23:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.966 killing process with pid 75896 00:06:45.966 20:23:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75896' 00:06:45.966 20:23:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 75896 00:06:45.966 20:23:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 75896 00:06:46.534 00:06:46.534 real 0m2.068s 00:06:46.534 user 0m2.262s 00:06:46.534 sys 0m0.662s 00:06:46.534 20:23:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.534 20:23:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:46.534 ************************************ 00:06:46.534 END TEST exit_on_failed_rpc_init 00:06:46.534 ************************************ 00:06:46.534 20:23:40 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:46.534 20:23:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:46.534 00:06:46.534 real 0m15.406s 00:06:46.534 user 0m14.326s 00:06:46.534 sys 0m2.184s 00:06:46.534 20:23:40 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.534 20:23:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.534 ************************************ 
00:06:46.534 END TEST skip_rpc 00:06:46.534 ************************************ 00:06:46.534 20:23:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:46.534 20:23:40 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:46.534 20:23:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.534 20:23:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.534 20:23:40 -- common/autotest_common.sh@10 -- # set +x 00:06:46.534 ************************************ 00:06:46.534 START TEST rpc_client 00:06:46.534 ************************************ 00:06:46.534 20:23:40 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:46.534 * Looking for test storage... 00:06:46.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:46.534 20:23:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:46.794 OK 00:06:46.794 20:23:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:46.794 00:06:46.794 real 0m0.139s 00:06:46.794 user 0m0.064s 00:06:46.794 sys 0m0.083s 00:06:46.794 20:23:40 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.794 20:23:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:46.794 ************************************ 00:06:46.794 END TEST rpc_client 00:06:46.794 ************************************ 00:06:46.794 20:23:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:46.794 20:23:40 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:46.794 20:23:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.794 20:23:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.794 20:23:40 -- common/autotest_common.sh@10 -- # set +x 00:06:46.794 ************************************ 00:06:46.794 START TEST json_config 00:06:46.794 ************************************ 00:06:46.794 20:23:40 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:46.794 20:23:40 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fe19b54-c671-479d-b03d-8ff2c5be0c37 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2fe19b54-c671-479d-b03d-8ff2c5be0c37 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.794 20:23:40 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.794 20:23:40 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.794 20:23:40 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.794 20:23:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.794 20:23:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.794 20:23:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.794 20:23:40 json_config -- paths/export.sh@5 -- # export PATH 00:06:46.794 20:23:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@47 -- # : 0 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.794 20:23:40 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.794 20:23:40 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:46.794 20:23:40 json_config -- 
json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:46.794 20:23:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:46.794 20:23:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:46.794 20:23:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:46.794 WARNING: No tests are enabled so not running JSON configuration tests 00:06:46.794 20:23:40 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:46.794 20:23:40 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:46.794 00:06:46.794 real 0m0.075s 00:06:46.794 user 0m0.031s 00:06:46.794 sys 0m0.044s 00:06:46.794 20:23:40 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.794 20:23:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.794 ************************************ 00:06:46.794 END TEST json_config 00:06:46.794 ************************************ 00:06:46.794 20:23:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:46.794 20:23:40 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:46.794 20:23:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.794 20:23:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.794 20:23:40 -- common/autotest_common.sh@10 -- # set +x 00:06:46.794 ************************************ 00:06:46.794 START TEST json_config_extra_key 00:06:46.794 ************************************ 00:06:46.794 20:23:40 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:47.053 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2fe19b54-c671-479d-b03d-8ff2c5be0c37 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2fe19b54-c671-479d-b03d-8ff2c5be0c37 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.053 20:23:40 
json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:47.053 20:23:40 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.053 20:23:40 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.053 20:23:40 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.053 20:23:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.053 20:23:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.053 20:23:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.053 20:23:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:47.053 20:23:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.053 20:23:40 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.053 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:47.053 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:47.053 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:47.053 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:47.053 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:47.054 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:47.054 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:47.054 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:47.054 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:47.054 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:47.054 INFO: launching applications... 00:06:47.054 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:47.054 20:23:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:47.054 20:23:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:47.054 20:23:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:47.054 20:23:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:47.054 20:23:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:47.054 20:23:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:47.054 20:23:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:47.054 20:23:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:47.054 20:23:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=76071 00:06:47.054 20:23:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:47.054 Waiting for target to run... 00:06:47.054 20:23:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 76071 /var/tmp/spdk_tgt.sock 00:06:47.054 20:23:40 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 76071 ']' 00:06:47.054 20:23:40 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:47.054 20:23:40 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:47.054 20:23:40 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:47.054 20:23:40 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
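A minimal sketch of the launch step driven at this point, assuming a stock SPDK checkout run from the repo root (the flags mirror the spdk_tgt invocation logged just above; the polling loop is an illustrative stand-in for waitforlisten, not its exact code):

  # start the target with a pre-baked JSON config on a private RPC socket
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  tgt_pid=$!
  # poll the RPC socket until the target answers; only then is it safe to issue RPCs
  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done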
00:06:47.054 20:23:40 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.054 20:23:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:47.054 [2024-07-12 20:23:41.104956] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:47.054 [2024-07-12 20:23:41.105186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76071 ] 00:06:47.620 [2024-07-12 20:23:41.569336] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:47.620 [2024-07-12 20:23:41.593992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.620 [2024-07-12 20:23:41.668126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.879 20:23:41 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.879 00:06:47.879 20:23:41 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:47.879 20:23:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:47.879 INFO: shutting down applications... 00:06:47.879 20:23:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:47.879 20:23:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:47.879 20:23:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:47.879 20:23:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:47.879 20:23:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 76071 ]] 00:06:47.879 20:23:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 76071 00:06:47.879 20:23:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:47.879 20:23:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.879 20:23:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 76071 00:06:47.879 20:23:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:48.444 20:23:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:48.444 20:23:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:48.444 20:23:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 76071 00:06:48.444 20:23:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:48.444 20:23:42 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:48.444 SPDK target shutdown done 00:06:48.444 Success 00:06:48.444 20:23:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:48.444 20:23:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:48.444 20:23:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:48.444 00:06:48.444 real 0m1.567s 00:06:48.444 user 0m1.336s 00:06:48.444 sys 0m0.558s 00:06:48.444 ************************************ 00:06:48.444 END TEST json_config_extra_key 00:06:48.444 ************************************ 00:06:48.444 20:23:42 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.444 20:23:42 json_config_extra_key -- 
common/autotest_common.sh@10 -- # set +x 00:06:48.444 20:23:42 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.444 20:23:42 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:48.444 20:23:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.444 20:23:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.444 20:23:42 -- common/autotest_common.sh@10 -- # set +x 00:06:48.444 ************************************ 00:06:48.444 START TEST alias_rpc 00:06:48.444 ************************************ 00:06:48.444 20:23:42 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:48.444 * Looking for test storage... 00:06:48.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:48.703 20:23:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.703 20:23:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=76138 00:06:48.703 20:23:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:48.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.703 20:23:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 76138 00:06:48.703 20:23:42 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 76138 ']' 00:06:48.703 20:23:42 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.703 20:23:42 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.703 20:23:42 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.703 20:23:42 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.703 20:23:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.703 [2024-07-12 20:23:42.715531] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:48.703 [2024-07-12 20:23:42.715958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76138 ] 00:06:48.960 [2024-07-12 20:23:42.868092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
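Each of these suites follows the same start/exercise/teardown shape; a rough sketch, assuming the default /var/tmp/spdk.sock RPC socket (waitforlisten and killprocess are the real helpers in test/common/autotest_common.sh -- the lines below are an illustrative equivalent, and the input file name is hypothetical):

  ./build/bin/spdk_tgt &                                  # launch the target under test
  spdk_tgt_pid=$!
  # block until the RPC socket answers before issuing any RPCs
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # exercise the feature under test; alias_rpc replays a config via load_config -i (alias_rpc.sh@17 below)
  ./scripts/rpc.py load_config -i < config.json
  kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"            # teardown, as killprocess does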
00:06:48.960 [2024-07-12 20:23:42.890625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.960 [2024-07-12 20:23:42.989274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.575 20:23:43 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.575 20:23:43 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:49.575 20:23:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:49.833 20:23:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 76138 00:06:49.833 20:23:43 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 76138 ']' 00:06:49.833 20:23:43 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 76138 00:06:49.833 20:23:43 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:49.833 20:23:43 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.833 20:23:43 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76138 00:06:50.092 killing process with pid 76138 00:06:50.092 20:23:43 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.092 20:23:43 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.092 20:23:43 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76138' 00:06:50.092 20:23:43 alias_rpc -- common/autotest_common.sh@967 -- # kill 76138 00:06:50.092 20:23:43 alias_rpc -- common/autotest_common.sh@972 -- # wait 76138 00:06:50.351 ************************************ 00:06:50.351 END TEST alias_rpc 00:06:50.351 ************************************ 00:06:50.351 00:06:50.351 real 0m1.921s 00:06:50.351 user 0m2.133s 00:06:50.351 sys 0m0.512s 00:06:50.351 20:23:44 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.351 20:23:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.351 20:23:44 -- common/autotest_common.sh@1142 -- # return 0 00:06:50.351 20:23:44 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:50.351 20:23:44 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:50.351 20:23:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.351 20:23:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.351 20:23:44 -- common/autotest_common.sh@10 -- # set +x 00:06:50.351 ************************************ 00:06:50.351 START TEST spdkcli_tcp 00:06:50.351 ************************************ 00:06:50.351 20:23:44 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:50.610 * Looking for test storage... 
00:06:50.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:50.610 20:23:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:50.610 20:23:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:50.610 20:23:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:50.610 20:23:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:50.610 20:23:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:50.610 20:23:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:50.610 20:23:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:50.610 20:23:44 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:50.610 20:23:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.610 20:23:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=76215 00:06:50.610 20:23:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 76215 00:06:50.610 20:23:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:50.610 20:23:44 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 76215 ']' 00:06:50.610 20:23:44 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.610 20:23:44 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.610 20:23:44 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.610 20:23:44 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.610 20:23:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.610 [2024-07-12 20:23:44.705171] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:50.610 [2024-07-12 20:23:44.705425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76215 ] 00:06:50.869 [2024-07-12 20:23:44.860810] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
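A compact sketch of the TCP bridging this suite verifies, assuming socat is available and a target is already listening on /var/tmp/spdk.sock (the address, port and rpc.py flags mirror the tcp.sh steps logged here):

  # bridge TCP port 9998 to the target's Unix-domain RPC socket
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # issue an RPC over TCP instead of the Unix socket (-r retries, -t timeout, as in the call below)
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  # socat may already have exited after serving the single connection
  kill "$socat_pid" 2>/dev/null || true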
00:06:50.869 [2024-07-12 20:23:44.880350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.869 [2024-07-12 20:23:44.985342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.869 [2024-07-12 20:23:44.985406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.806 20:23:45 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.806 20:23:45 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:51.806 20:23:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=76232 00:06:51.806 20:23:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:51.806 20:23:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:51.806 [ 00:06:51.806 "bdev_malloc_delete", 00:06:51.806 "bdev_malloc_create", 00:06:51.806 "bdev_null_resize", 00:06:51.806 "bdev_null_delete", 00:06:51.806 "bdev_null_create", 00:06:51.806 "bdev_nvme_cuse_unregister", 00:06:51.806 "bdev_nvme_cuse_register", 00:06:51.806 "bdev_opal_new_user", 00:06:51.806 "bdev_opal_set_lock_state", 00:06:51.806 "bdev_opal_delete", 00:06:51.806 "bdev_opal_get_info", 00:06:51.806 "bdev_opal_create", 00:06:51.806 "bdev_nvme_opal_revert", 00:06:51.806 "bdev_nvme_opal_init", 00:06:51.806 "bdev_nvme_send_cmd", 00:06:51.806 "bdev_nvme_get_path_iostat", 00:06:51.806 "bdev_nvme_get_mdns_discovery_info", 00:06:51.806 "bdev_nvme_stop_mdns_discovery", 00:06:51.806 "bdev_nvme_start_mdns_discovery", 00:06:51.806 "bdev_nvme_set_multipath_policy", 00:06:51.806 "bdev_nvme_set_preferred_path", 00:06:51.806 "bdev_nvme_get_io_paths", 00:06:51.806 "bdev_nvme_remove_error_injection", 00:06:51.806 "bdev_nvme_add_error_injection", 00:06:51.806 "bdev_nvme_get_discovery_info", 00:06:51.806 "bdev_nvme_stop_discovery", 00:06:51.806 "bdev_nvme_start_discovery", 00:06:51.806 "bdev_nvme_get_controller_health_info", 00:06:51.806 "bdev_nvme_disable_controller", 00:06:51.806 "bdev_nvme_enable_controller", 00:06:51.806 "bdev_nvme_reset_controller", 00:06:51.806 "bdev_nvme_get_transport_statistics", 00:06:51.806 "bdev_nvme_apply_firmware", 00:06:51.806 "bdev_nvme_detach_controller", 00:06:51.806 "bdev_nvme_get_controllers", 00:06:51.806 "bdev_nvme_attach_controller", 00:06:51.806 "bdev_nvme_set_hotplug", 00:06:51.806 "bdev_nvme_set_options", 00:06:51.806 "bdev_passthru_delete", 00:06:51.806 "bdev_passthru_create", 00:06:51.806 "bdev_lvol_set_parent_bdev", 00:06:51.806 "bdev_lvol_set_parent", 00:06:51.806 "bdev_lvol_check_shallow_copy", 00:06:51.806 "bdev_lvol_start_shallow_copy", 00:06:51.806 "bdev_lvol_grow_lvstore", 00:06:51.806 "bdev_lvol_get_lvols", 00:06:51.806 "bdev_lvol_get_lvstores", 00:06:51.806 "bdev_lvol_delete", 00:06:51.806 "bdev_lvol_set_read_only", 00:06:51.806 "bdev_lvol_resize", 00:06:51.806 "bdev_lvol_decouple_parent", 00:06:51.806 "bdev_lvol_inflate", 00:06:51.806 "bdev_lvol_rename", 00:06:51.806 "bdev_lvol_clone_bdev", 00:06:51.806 "bdev_lvol_clone", 00:06:51.806 "bdev_lvol_snapshot", 00:06:51.806 "bdev_lvol_create", 00:06:51.806 "bdev_lvol_delete_lvstore", 00:06:51.806 "bdev_lvol_rename_lvstore", 00:06:51.806 "bdev_lvol_create_lvstore", 00:06:51.806 "bdev_raid_set_options", 00:06:51.806 "bdev_raid_remove_base_bdev", 00:06:51.806 "bdev_raid_add_base_bdev", 00:06:51.806 "bdev_raid_delete", 00:06:51.806 "bdev_raid_create", 00:06:51.806 "bdev_raid_get_bdevs", 00:06:51.806 "bdev_error_inject_error", 00:06:51.806 "bdev_error_delete", 00:06:51.806 
"bdev_error_create", 00:06:51.806 "bdev_split_delete", 00:06:51.806 "bdev_split_create", 00:06:51.806 "bdev_delay_delete", 00:06:51.806 "bdev_delay_create", 00:06:51.806 "bdev_delay_update_latency", 00:06:51.806 "bdev_zone_block_delete", 00:06:51.806 "bdev_zone_block_create", 00:06:51.806 "blobfs_create", 00:06:51.806 "blobfs_detect", 00:06:51.806 "blobfs_set_cache_size", 00:06:51.806 "bdev_xnvme_delete", 00:06:51.806 "bdev_xnvme_create", 00:06:51.806 "bdev_aio_delete", 00:06:51.806 "bdev_aio_rescan", 00:06:51.806 "bdev_aio_create", 00:06:51.806 "bdev_ftl_set_property", 00:06:51.806 "bdev_ftl_get_properties", 00:06:51.806 "bdev_ftl_get_stats", 00:06:51.806 "bdev_ftl_unmap", 00:06:51.806 "bdev_ftl_unload", 00:06:51.806 "bdev_ftl_delete", 00:06:51.806 "bdev_ftl_load", 00:06:51.806 "bdev_ftl_create", 00:06:51.806 "bdev_virtio_attach_controller", 00:06:51.806 "bdev_virtio_scsi_get_devices", 00:06:51.806 "bdev_virtio_detach_controller", 00:06:51.806 "bdev_virtio_blk_set_hotplug", 00:06:51.806 "bdev_iscsi_delete", 00:06:51.806 "bdev_iscsi_create", 00:06:51.806 "bdev_iscsi_set_options", 00:06:51.806 "accel_error_inject_error", 00:06:51.806 "ioat_scan_accel_module", 00:06:51.806 "dsa_scan_accel_module", 00:06:51.806 "iaa_scan_accel_module", 00:06:51.806 "keyring_file_remove_key", 00:06:51.806 "keyring_file_add_key", 00:06:51.806 "keyring_linux_set_options", 00:06:51.806 "iscsi_get_histogram", 00:06:51.806 "iscsi_enable_histogram", 00:06:51.806 "iscsi_set_options", 00:06:51.806 "iscsi_get_auth_groups", 00:06:51.806 "iscsi_auth_group_remove_secret", 00:06:51.806 "iscsi_auth_group_add_secret", 00:06:51.806 "iscsi_delete_auth_group", 00:06:51.806 "iscsi_create_auth_group", 00:06:51.806 "iscsi_set_discovery_auth", 00:06:51.806 "iscsi_get_options", 00:06:51.807 "iscsi_target_node_request_logout", 00:06:51.807 "iscsi_target_node_set_redirect", 00:06:51.807 "iscsi_target_node_set_auth", 00:06:51.807 "iscsi_target_node_add_lun", 00:06:51.807 "iscsi_get_stats", 00:06:51.807 "iscsi_get_connections", 00:06:51.807 "iscsi_portal_group_set_auth", 00:06:51.807 "iscsi_start_portal_group", 00:06:51.807 "iscsi_delete_portal_group", 00:06:51.807 "iscsi_create_portal_group", 00:06:51.807 "iscsi_get_portal_groups", 00:06:51.807 "iscsi_delete_target_node", 00:06:51.807 "iscsi_target_node_remove_pg_ig_maps", 00:06:51.807 "iscsi_target_node_add_pg_ig_maps", 00:06:51.807 "iscsi_create_target_node", 00:06:51.807 "iscsi_get_target_nodes", 00:06:51.807 "iscsi_delete_initiator_group", 00:06:51.807 "iscsi_initiator_group_remove_initiators", 00:06:51.807 "iscsi_initiator_group_add_initiators", 00:06:51.807 "iscsi_create_initiator_group", 00:06:51.807 "iscsi_get_initiator_groups", 00:06:51.807 "nvmf_set_crdt", 00:06:51.807 "nvmf_set_config", 00:06:51.807 "nvmf_set_max_subsystems", 00:06:51.807 "nvmf_stop_mdns_prr", 00:06:51.807 "nvmf_publish_mdns_prr", 00:06:51.807 "nvmf_subsystem_get_listeners", 00:06:51.807 "nvmf_subsystem_get_qpairs", 00:06:51.807 "nvmf_subsystem_get_controllers", 00:06:51.807 "nvmf_get_stats", 00:06:51.807 "nvmf_get_transports", 00:06:51.807 "nvmf_create_transport", 00:06:51.807 "nvmf_get_targets", 00:06:51.807 "nvmf_delete_target", 00:06:51.807 "nvmf_create_target", 00:06:51.807 "nvmf_subsystem_allow_any_host", 00:06:51.807 "nvmf_subsystem_remove_host", 00:06:51.807 "nvmf_subsystem_add_host", 00:06:51.807 "nvmf_ns_remove_host", 00:06:51.807 "nvmf_ns_add_host", 00:06:51.807 "nvmf_subsystem_remove_ns", 00:06:51.807 "nvmf_subsystem_add_ns", 00:06:51.807 "nvmf_subsystem_listener_set_ana_state", 00:06:51.807 
"nvmf_discovery_get_referrals", 00:06:51.807 "nvmf_discovery_remove_referral", 00:06:51.807 "nvmf_discovery_add_referral", 00:06:51.807 "nvmf_subsystem_remove_listener", 00:06:51.807 "nvmf_subsystem_add_listener", 00:06:51.807 "nvmf_delete_subsystem", 00:06:51.807 "nvmf_create_subsystem", 00:06:51.807 "nvmf_get_subsystems", 00:06:51.807 "env_dpdk_get_mem_stats", 00:06:51.807 "nbd_get_disks", 00:06:51.807 "nbd_stop_disk", 00:06:51.807 "nbd_start_disk", 00:06:51.807 "ublk_recover_disk", 00:06:51.807 "ublk_get_disks", 00:06:51.807 "ublk_stop_disk", 00:06:51.807 "ublk_start_disk", 00:06:51.807 "ublk_destroy_target", 00:06:51.807 "ublk_create_target", 00:06:51.807 "virtio_blk_create_transport", 00:06:51.807 "virtio_blk_get_transports", 00:06:51.807 "vhost_controller_set_coalescing", 00:06:51.807 "vhost_get_controllers", 00:06:51.807 "vhost_delete_controller", 00:06:51.807 "vhost_create_blk_controller", 00:06:51.807 "vhost_scsi_controller_remove_target", 00:06:51.807 "vhost_scsi_controller_add_target", 00:06:51.807 "vhost_start_scsi_controller", 00:06:51.807 "vhost_create_scsi_controller", 00:06:51.807 "thread_set_cpumask", 00:06:51.807 "framework_get_governor", 00:06:51.807 "framework_get_scheduler", 00:06:51.807 "framework_set_scheduler", 00:06:51.807 "framework_get_reactors", 00:06:51.807 "thread_get_io_channels", 00:06:51.807 "thread_get_pollers", 00:06:51.807 "thread_get_stats", 00:06:51.807 "framework_monitor_context_switch", 00:06:51.807 "spdk_kill_instance", 00:06:51.807 "log_enable_timestamps", 00:06:51.807 "log_get_flags", 00:06:51.807 "log_clear_flag", 00:06:51.807 "log_set_flag", 00:06:51.807 "log_get_level", 00:06:51.807 "log_set_level", 00:06:51.807 "log_get_print_level", 00:06:51.807 "log_set_print_level", 00:06:51.807 "framework_enable_cpumask_locks", 00:06:51.807 "framework_disable_cpumask_locks", 00:06:51.807 "framework_wait_init", 00:06:51.807 "framework_start_init", 00:06:51.807 "scsi_get_devices", 00:06:51.807 "bdev_get_histogram", 00:06:51.807 "bdev_enable_histogram", 00:06:51.807 "bdev_set_qos_limit", 00:06:51.807 "bdev_set_qd_sampling_period", 00:06:51.807 "bdev_get_bdevs", 00:06:51.807 "bdev_reset_iostat", 00:06:51.807 "bdev_get_iostat", 00:06:51.807 "bdev_examine", 00:06:51.807 "bdev_wait_for_examine", 00:06:51.807 "bdev_set_options", 00:06:51.807 "notify_get_notifications", 00:06:51.807 "notify_get_types", 00:06:51.807 "accel_get_stats", 00:06:51.807 "accel_set_options", 00:06:51.807 "accel_set_driver", 00:06:51.807 "accel_crypto_key_destroy", 00:06:51.807 "accel_crypto_keys_get", 00:06:51.807 "accel_crypto_key_create", 00:06:51.807 "accel_assign_opc", 00:06:51.807 "accel_get_module_info", 00:06:51.807 "accel_get_opc_assignments", 00:06:51.807 "vmd_rescan", 00:06:51.807 "vmd_remove_device", 00:06:51.807 "vmd_enable", 00:06:51.807 "sock_get_default_impl", 00:06:51.807 "sock_set_default_impl", 00:06:51.807 "sock_impl_set_options", 00:06:51.807 "sock_impl_get_options", 00:06:51.807 "iobuf_get_stats", 00:06:51.807 "iobuf_set_options", 00:06:51.807 "framework_get_pci_devices", 00:06:51.807 "framework_get_config", 00:06:51.807 "framework_get_subsystems", 00:06:51.807 "trace_get_info", 00:06:51.807 "trace_get_tpoint_group_mask", 00:06:51.807 "trace_disable_tpoint_group", 00:06:51.807 "trace_enable_tpoint_group", 00:06:51.807 "trace_clear_tpoint_mask", 00:06:51.807 "trace_set_tpoint_mask", 00:06:51.807 "keyring_get_keys", 00:06:51.807 "spdk_get_version", 00:06:51.807 "rpc_get_methods" 00:06:51.807 ] 00:06:52.066 20:23:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit 
run_spdk_tgt_tcp 00:06:52.066 20:23:45 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:52.066 20:23:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.066 20:23:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:52.066 20:23:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 76215 00:06:52.066 20:23:45 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 76215 ']' 00:06:52.066 20:23:45 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 76215 00:06:52.066 20:23:45 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:52.066 20:23:46 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.066 20:23:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76215 00:06:52.066 killing process with pid 76215 00:06:52.066 20:23:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.066 20:23:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.066 20:23:46 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76215' 00:06:52.066 20:23:46 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 76215 00:06:52.066 20:23:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 76215 00:06:52.634 ************************************ 00:06:52.634 END TEST spdkcli_tcp 00:06:52.634 ************************************ 00:06:52.634 00:06:52.634 real 0m2.004s 00:06:52.634 user 0m3.565s 00:06:52.634 sys 0m0.609s 00:06:52.634 20:23:46 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.634 20:23:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.634 20:23:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:52.634 20:23:46 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:52.634 20:23:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.634 20:23:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.634 20:23:46 -- common/autotest_common.sh@10 -- # set +x 00:06:52.634 ************************************ 00:06:52.634 START TEST dpdk_mem_utility 00:06:52.634 ************************************ 00:06:52.634 20:23:46 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:52.634 * Looking for test storage... 00:06:52.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:52.634 20:23:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:52.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
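A short sketch of the flow exercised below, assuming a spdk_tgt already running on the default socket (the commands mirror test_dpdk_mem_info.sh as logged; per the RPC reply shown below, the stats land in /tmp/spdk_mem_dump.txt, which the script then summarizes):

  # ask the running target to dump its DPDK memory statistics
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # summarize the dump file: heaps, mempools and memzones
  ./scripts/dpdk_mem_info.py
  # the test repeats the report with -m 0, as logged below
  ./scripts/dpdk_mem_info.py -m 0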
00:06:52.634 20:23:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=76307 00:06:52.634 20:23:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 76307 00:06:52.634 20:23:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.634 20:23:46 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 76307 ']' 00:06:52.634 20:23:46 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.634 20:23:46 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.634 20:23:46 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.634 20:23:46 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.634 20:23:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:52.634 [2024-07-12 20:23:46.735937] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:52.634 [2024-07-12 20:23:46.736123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76307 ] 00:06:52.894 [2024-07-12 20:23:46.880173] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:52.894 [2024-07-12 20:23:46.902918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.894 [2024-07-12 20:23:47.004940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.833 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.833 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:53.833 20:23:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:53.833 20:23:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:53.833 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.833 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:53.833 { 00:06:53.833 "filename": "/tmp/spdk_mem_dump.txt" 00:06:53.833 } 00:06:53.833 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.833 20:23:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:53.833 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:53.833 1 heaps totaling size 814.000000 MiB 00:06:53.833 size: 814.000000 MiB heap id: 0 00:06:53.833 end heaps---------- 00:06:53.833 8 mempools totaling size 598.116089 MiB 00:06:53.833 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:53.833 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:53.833 size: 84.521057 MiB name: bdev_io_76307 00:06:53.833 size: 51.011292 MiB name: evtpool_76307 00:06:53.833 size: 50.003479 MiB name: msgpool_76307 00:06:53.833 size: 21.763794 MiB name: PDU_Pool 00:06:53.833 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:53.834 size: 0.026123 MiB name: Session_Pool 00:06:53.834 end mempools------- 00:06:53.834 6 memzones totaling size 4.142822 MiB 00:06:53.834 size: 
1.000366 MiB name: RG_ring_0_76307 00:06:53.834 size: 1.000366 MiB name: RG_ring_1_76307 00:06:53.834 size: 1.000366 MiB name: RG_ring_4_76307 00:06:53.834 size: 1.000366 MiB name: RG_ring_5_76307 00:06:53.834 size: 0.125366 MiB name: RG_ring_2_76307 00:06:53.834 size: 0.015991 MiB name: RG_ring_3_76307 00:06:53.834 end memzones------- 00:06:53.834 20:23:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:53.834 heap id: 0 total size: 814.000000 MiB number of busy elements: 298 number of free elements: 15 00:06:53.834 list of free elements. size: 12.472290 MiB 00:06:53.834 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:53.834 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:53.834 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:53.834 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:53.834 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:53.834 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:53.834 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:53.834 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:53.834 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:53.834 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:06:53.834 element at address: 0x20000b200000 with size: 0.489807 MiB 00:06:53.834 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:53.834 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:53.834 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:53.834 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:53.834 list of standard malloc elements. 
size: 199.265137 MiB 00:06:53.834 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:53.834 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:53.834 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:53.834 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:53.834 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:53.834 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:53.834 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:53.834 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:53.834 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:53.834 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:06:53.834 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:53.834 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:53.834 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:53.834 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:53.834 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:53.834 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:53.834 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:53.834 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:53.834 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:53.834 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:53.834 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:53.834 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92200 
with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa946c0 with size: 0.000183 MiB 
00:06:53.835 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:53.835 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:53.835 element at 
address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:53.835 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6fd80 
with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:53.836 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:53.836 list of memzone associated elements. size: 602.262573 MiB 00:06:53.836 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:53.836 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:53.836 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:53.836 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:53.836 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:53.836 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_76307_0 00:06:53.836 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:53.836 associated memzone info: size: 48.002930 MiB name: MP_evtpool_76307_0 00:06:53.836 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:53.836 associated memzone info: size: 48.002930 MiB name: MP_msgpool_76307_0 00:06:53.836 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:53.836 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:53.836 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:53.836 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:53.836 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:53.836 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_76307 00:06:53.836 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:53.836 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_76307 00:06:53.836 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:53.836 associated memzone info: size: 1.007996 MiB name: MP_evtpool_76307 00:06:53.836 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:53.836 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:53.836 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:53.836 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:53.836 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:53.836 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:53.836 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:53.836 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:53.836 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:53.836 associated memzone info: size: 1.000366 MiB name: RG_ring_0_76307 00:06:53.836 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:53.836 associated memzone info: size: 1.000366 MiB name: RG_ring_1_76307 00:06:53.836 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:53.836 associated memzone info: size: 1.000366 MiB name: RG_ring_4_76307 00:06:53.836 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:53.836 associated memzone info: size: 1.000366 MiB name: RG_ring_5_76307 00:06:53.836 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:53.836 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_76307 00:06:53.836 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:53.836 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:53.836 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:53.836 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:53.836 element at address: 0x20001947c540 with size: 
0.250488 MiB 00:06:53.836 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:53.836 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:53.836 associated memzone info: size: 0.125366 MiB name: RG_ring_2_76307 00:06:53.836 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:53.836 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:53.836 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:53.836 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:53.836 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:53.836 associated memzone info: size: 0.015991 MiB name: RG_ring_3_76307 00:06:53.836 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:53.836 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:53.836 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:53.836 associated memzone info: size: 0.000183 MiB name: MP_msgpool_76307 00:06:53.836 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:53.836 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_76307 00:06:53.836 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:53.836 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:53.836 20:23:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:53.836 20:23:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 76307 00:06:53.836 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 76307 ']' 00:06:53.836 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 76307 00:06:53.836 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:53.836 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.836 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76307 00:06:53.836 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.836 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.836 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76307' 00:06:53.836 killing process with pid 76307 00:06:53.836 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 76307 00:06:53.836 20:23:47 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 76307 00:06:54.426 00:06:54.427 real 0m1.802s 00:06:54.427 user 0m1.881s 00:06:54.427 sys 0m0.524s 00:06:54.427 20:23:48 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.427 20:23:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:54.427 ************************************ 00:06:54.427 END TEST dpdk_mem_utility 00:06:54.427 ************************************ 00:06:54.427 20:23:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:54.427 20:23:48 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:54.427 20:23:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.427 20:23:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.427 20:23:48 -- common/autotest_common.sh@10 -- # set +x 00:06:54.427 ************************************ 00:06:54.427 START TEST event 00:06:54.427 ************************************ 00:06:54.427 20:23:48 event -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:54.427 * Looking for test storage... 00:06:54.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:54.427 20:23:48 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:54.427 20:23:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:54.427 20:23:48 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:54.427 20:23:48 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:54.427 20:23:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.427 20:23:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.427 ************************************ 00:06:54.427 START TEST event_perf 00:06:54.427 ************************************ 00:06:54.427 20:23:48 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:54.427 Running I/O for 1 seconds...[2024-07-12 20:23:48.538871] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:54.427 [2024-07-12 20:23:48.539264] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76385 ] 00:06:54.686 [2024-07-12 20:23:48.695747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.686 [2024-07-12 20:23:48.715448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.686 [2024-07-12 20:23:48.827079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.686 [2024-07-12 20:23:48.827302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.686 Running I/O for 1 seconds...[2024-07-12 20:23:48.827346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.686 [2024-07-12 20:23:48.827442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.062 00:06:56.062 lcore 0: 190848 00:06:56.062 lcore 1: 190847 00:06:56.062 lcore 2: 190849 00:06:56.062 lcore 3: 190848 00:06:56.062 done. 
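The event_perf pass above started one reactor per core in the 0xF mask and, over the one-second window requested with -t 1, each lcore turned over roughly 190k events. A minimal standalone rerun of the same benchmark, reusing the binary path and flags visible in the log (reading -m as the reactor core mask and -t as the run time in seconds is inferred from that invocation and the "Running I/O for 1 seconds..." banner):

  # four reactors on cores 0-3, one-second measurement, as in the run above
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1

  # assumed variant (not part of the logged run): a narrower 0x3 mask to compare
  # per-core event throughput with only two reactors
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 1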
00:06:56.062 00:06:56.062 real 0m1.434s 00:06:56.062 user 0m4.178s 00:06:56.062 sys 0m0.130s 00:06:56.062 20:23:49 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.062 20:23:49 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.062 ************************************ 00:06:56.062 END TEST event_perf 00:06:56.062 ************************************ 00:06:56.062 20:23:49 event -- common/autotest_common.sh@1142 -- # return 0 00:06:56.062 20:23:49 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:56.062 20:23:49 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:56.062 20:23:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.062 20:23:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.063 ************************************ 00:06:56.063 START TEST event_reactor 00:06:56.063 ************************************ 00:06:56.063 20:23:49 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:56.063 [2024-07-12 20:23:50.035811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:56.063 [2024-07-12 20:23:50.036113] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76419 ] 00:06:56.063 [2024-07-12 20:23:50.201747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:56.321 [2024-07-12 20:23:50.224174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.321 [2024-07-12 20:23:50.322165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.698 test_start 00:06:57.698 oneshot 00:06:57.698 tick 100 00:06:57.698 tick 100 00:06:57.698 tick 250 00:06:57.698 tick 100 00:06:57.698 tick 100 00:06:57.698 tick 250 00:06:57.698 tick 100 00:06:57.698 tick 500 00:06:57.698 tick 100 00:06:57.698 tick 100 00:06:57.698 tick 250 00:06:57.698 tick 100 00:06:57.698 tick 100 00:06:57.698 test_end 00:06:57.698 00:06:57.698 real 0m1.423s 00:06:57.698 user 0m1.197s 00:06:57.698 sys 0m0.117s 00:06:57.698 20:23:51 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.698 ************************************ 00:06:57.698 END TEST event_reactor 00:06:57.698 ************************************ 00:06:57.698 20:23:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:57.698 20:23:51 event -- common/autotest_common.sh@1142 -- # return 0 00:06:57.698 20:23:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:57.698 20:23:51 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:57.698 20:23:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.698 20:23:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.698 ************************************ 00:06:57.698 START TEST event_reactor_perf 00:06:57.698 ************************************ 00:06:57.698 20:23:51 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:57.698 [2024-07-12 20:23:51.499326] Starting SPDK 
v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:57.698 [2024-07-12 20:23:51.499675] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76455 ] 00:06:57.698 [2024-07-12 20:23:51.651704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:57.698 [2024-07-12 20:23:51.672824] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.698 [2024-07-12 20:23:51.776234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.103 test_start 00:06:59.103 test_end 00:06:59.103 Performance: 270982 events per second 00:06:59.103 00:06:59.103 real 0m1.418s 00:06:59.103 user 0m1.190s 00:06:59.103 sys 0m0.118s 00:06:59.103 20:23:52 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.103 ************************************ 00:06:59.103 END TEST event_reactor_perf 00:06:59.103 ************************************ 00:06:59.103 20:23:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.103 20:23:52 event -- common/autotest_common.sh@1142 -- # return 0 00:06:59.103 20:23:52 event -- event/event.sh@49 -- # uname -s 00:06:59.103 20:23:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:59.103 20:23:52 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:59.103 20:23:52 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.103 20:23:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.103 20:23:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.103 ************************************ 00:06:59.103 START TEST event_scheduler 00:06:59.103 ************************************ 00:06:59.103 20:23:52 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:59.103 * Looking for test storage... 00:06:59.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:59.103 20:23:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:59.103 20:23:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=76518 00:06:59.103 20:23:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:59.103 20:23:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.103 20:23:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 76518 00:06:59.103 20:23:53 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 76518 ']' 00:06:59.103 20:23:53 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.103 20:23:53 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.103 20:23:53 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
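The scheduler test app launched above is held at --wait-for-rpc, so nothing runs until the framework is configured over /var/tmp/spdk.sock; the test then selects the dynamic scheduler and creates pinned threads through the scheduler_plugin RPCs that appear further down. A condensed sketch of that RPC sequence, assuming rpc_cmd resolves to scripts/rpc.py against the default socket (scheduler.sh@29 aliases rpc=rpc_cmd):

  # switch the framework to the dynamic scheduler while the app is still idle,
  # then let initialization proceed
  rpc_cmd framework_set_scheduler dynamic
  rpc_cmd framework_start_init

  # plugin RPCs used by the test: -n names the thread, -m pins it to a cpumask,
  # -a sets the active load it reports (percent)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0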
00:06:59.103 20:23:53 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.103 20:23:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:59.103 [2024-07-12 20:23:53.112279] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:59.103 [2024-07-12 20:23:53.112492] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76518 ] 00:06:59.360 [2024-07-12 20:23:53.265931] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.360 [2024-07-12 20:23:53.288391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.360 [2024-07-12 20:23:53.388033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.360 [2024-07-12 20:23:53.388235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.360 [2024-07-12 20:23:53.388182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.360 [2024-07-12 20:23:53.388333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.926 20:23:54 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.926 20:23:54 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:59.926 20:23:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:59.926 20:23:54 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.926 20:23:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:59.926 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:59.927 POWER: Cannot set governor of lcore 0 to userspace 00:06:59.927 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:59.927 POWER: Cannot set governor of lcore 0 to performance 00:06:59.927 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:59.927 POWER: Cannot set governor of lcore 0 to userspace 00:06:59.927 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:59.927 POWER: Cannot set governor of lcore 0 to userspace 00:06:59.927 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:59.927 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:59.927 POWER: Unable to set Power Management Environment for lcore 0 00:06:59.927 [2024-07-12 20:23:54.062522] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:59.927 [2024-07-12 20:23:54.062547] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:59.927 [2024-07-12 20:23:54.062592] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:59.927 [2024-07-12 20:23:54.062615] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:59.927 [2024-07-12 20:23:54.062647] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:59.927 [2024-07-12 20:23:54.062661] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:59.927 20:23:54 event.event_scheduler -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.927 20:23:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:59.927 20:23:54 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.927 20:23:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 [2024-07-12 20:23:54.157937] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:00.186 20:23:54 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.186 20:23:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:00.186 20:23:54 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.186 20:23:54 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 ************************************ 00:07:00.186 START TEST scheduler_create_thread 00:07:00.186 ************************************ 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 2 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 3 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 4 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 5 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 6 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 7 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 8 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 9 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 10 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:00.186 20:23:54 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.186 20:23:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.091 20:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.091 20:23:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:02.091 20:23:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:02.091 20:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.091 20:23:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.655 20:23:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.655 00:07:02.655 real 0m2.613s 00:07:02.655 user 0m0.014s 00:07:02.655 sys 0m0.010s 00:07:02.655 20:23:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.655 20:23:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.655 ************************************ 00:07:02.655 END TEST scheduler_create_thread 00:07:02.655 ************************************ 00:07:02.912 20:23:56 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:07:02.912 20:23:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:02.912 20:23:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 76518 00:07:02.912 20:23:56 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 76518 ']' 00:07:02.912 20:23:56 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 76518 00:07:02.912 20:23:56 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:07:02.912 20:23:56 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.912 20:23:56 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76518 00:07:02.912 killing process with pid 76518 00:07:02.912 20:23:56 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:02.912 20:23:56 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:02.912 20:23:56 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76518' 00:07:02.912 20:23:56 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 76518 00:07:02.912 20:23:56 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 76518 00:07:03.212 [2024-07-12 20:23:57.263343] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
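On this VM the dynamic scheduler could not take over CPU frequency control: the lcore scaling_governor files could not be opened, the virtio power-agent channel is missing, and dpdk_governor therefore failed to initialize on core 0, leaving the scheduler running with the thresholds logged above (load limit 20, core limit 80, core busy 95). A quick host-side check of whether governor control is available at all, using the same sysfs path the POWER errors reference (a sketch, not part of the test itself):

  # the 'Cannot set governor of lcore 0' errors come from this path being
  # absent or unwritable inside the guest
  for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
      [ -e "$g" ] && echo "$g: $(cat "$g")" || echo "no cpufreq governor exposed"
  done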
00:07:03.470 ************************************ 00:07:03.470 END TEST event_scheduler 00:07:03.470 ************************************ 00:07:03.470 00:07:03.470 real 0m4.598s 00:07:03.470 user 0m8.392s 00:07:03.470 sys 0m0.459s 00:07:03.470 20:23:57 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.470 20:23:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:03.470 20:23:57 event -- common/autotest_common.sh@1142 -- # return 0 00:07:03.470 20:23:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:03.470 20:23:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:03.470 20:23:57 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.470 20:23:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.470 20:23:57 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.470 ************************************ 00:07:03.470 START TEST app_repeat 00:07:03.470 ************************************ 00:07:03.470 20:23:57 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:03.470 Process app_repeat pid: 76613 00:07:03.470 spdk_app_start Round 0 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=76613 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 76613' 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.470 20:23:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:03.471 20:23:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 76613 /var/tmp/spdk-nbd.sock 00:07:03.471 20:23:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 76613 ']' 00:07:03.471 20:23:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:03.471 20:23:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.471 20:23:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.471 20:23:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.471 20:23:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.729 [2024-07-12 20:23:57.634671] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
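The app_repeat pass that follows exercises the malloc-bdev-over-NBD path: two malloc bdevs are created through the app's /var/tmp/spdk-nbd.sock RPC socket, exported as /dev/nbd0 and /dev/nbd1, filled with 1 MiB of random data and compared back. A condensed sketch of that verification flow for a single device, built only from the RPC and dd/cmp invocations visible below (the waitfornbd/cleanup helpers of nbd_common.sh are left out):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'

  # 64 MB malloc bdev with 4096-byte blocks, exported over NBD
  $rpc bdev_malloc_create 64 4096          # prints the new bdev name, e.g. Malloc0
  $rpc nbd_start_disk Malloc0 /dev/nbd0

  # write a 1 MiB random pattern through the NBD device, then verify it
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
  dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0

  # tear down the export
  $rpc nbd_stop_disk /dev/nbd0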
00:07:03.729 [2024-07-12 20:23:57.634870] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76613 ] 00:07:03.729 [2024-07-12 20:23:57.777660] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:03.729 [2024-07-12 20:23:57.795366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.987 [2024-07-12 20:23:57.896222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.987 [2024-07-12 20:23:57.896288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.553 20:23:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.553 20:23:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:04.553 20:23:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.811 Malloc0 00:07:04.811 20:23:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:05.068 Malloc1 00:07:05.068 20:23:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.068 20:23:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:05.634 /dev/nbd0 00:07:05.634 20:23:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.634 20:23:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:05.634 20:23:59 event.app_repeat -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.634 1+0 records in 00:07:05.634 1+0 records out 00:07:05.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437937 s, 9.4 MB/s 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:05.634 20:23:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:05.634 20:23:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.634 20:23:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.634 20:23:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.892 /dev/nbd1 00:07:05.892 20:23:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.892 20:23:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.892 1+0 records in 00:07:05.892 1+0 records out 00:07:05.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503822 s, 8.1 MB/s 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:05.892 20:23:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:05.892 20:23:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.892 20:23:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.892 20:23:59 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.892 20:23:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.892 20:23:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:06.150 { 00:07:06.150 "nbd_device": "/dev/nbd0", 00:07:06.150 "bdev_name": "Malloc0" 00:07:06.150 }, 00:07:06.150 { 00:07:06.150 "nbd_device": "/dev/nbd1", 00:07:06.150 "bdev_name": "Malloc1" 00:07:06.150 } 00:07:06.150 ]' 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:06.150 { 00:07:06.150 "nbd_device": "/dev/nbd0", 00:07:06.150 "bdev_name": "Malloc0" 00:07:06.150 }, 00:07:06.150 { 00:07:06.150 "nbd_device": "/dev/nbd1", 00:07:06.150 "bdev_name": "Malloc1" 00:07:06.150 } 00:07:06.150 ]' 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:06.150 /dev/nbd1' 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:06.150 /dev/nbd1' 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:06.150 256+0 records in 00:07:06.150 256+0 records out 00:07:06.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00659655 s, 159 MB/s 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:06.150 256+0 records in 00:07:06.150 256+0 records out 00:07:06.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283546 s, 37.0 MB/s 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:06.150 256+0 records in 00:07:06.150 256+0 records out 00:07:06.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307448 s, 34.1 MB/s 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:06.150 
20:24:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.150 20:24:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.408 20:24:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.408 20:24:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.408 20:24:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.408 20:24:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.408 20:24:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.408 20:24:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.408 20:24:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.408 20:24:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.409 20:24:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.409 20:24:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.667 20:24:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.667 20:24:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.667 20:24:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.667 20:24:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.667 20:24:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.667 20:24:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.667 20:24:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.667 20:24:00 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:06.667 20:24:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.667 20:24:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.667 20:24:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.924 20:24:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.924 20:24:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.924 20:24:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.184 20:24:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.184 20:24:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.184 20:24:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.184 20:24:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:07.184 20:24:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.184 20:24:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.184 20:24:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.184 20:24:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.184 20:24:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.184 20:24:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:07.450 20:24:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:07.450 [2024-07-12 20:24:01.573319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.720 [2024-07-12 20:24:01.662834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.720 [2024-07-12 20:24:01.662841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.720 [2024-07-12 20:24:01.720738] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.720 [2024-07-12 20:24:01.720829] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.249 spdk_app_start Round 1 00:07:10.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.249 20:24:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.249 20:24:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:10.249 20:24:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 76613 /var/tmp/spdk-nbd.sock 00:07:10.249 20:24:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 76613 ']' 00:07:10.249 20:24:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.249 20:24:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.249 20:24:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:10.249 20:24:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.249 20:24:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.508 20:24:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.508 20:24:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:10.508 20:24:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.767 Malloc0 00:07:10.767 20:24:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.055 Malloc1 00:07:11.055 20:24:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.055 20:24:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:11.314 /dev/nbd0 00:07:11.314 20:24:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.314 20:24:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.314 1+0 records in 00:07:11.314 1+0 records out 
00:07:11.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000971429 s, 4.2 MB/s 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:11.314 20:24:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:11.314 20:24:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.314 20:24:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.314 20:24:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:11.574 /dev/nbd1 00:07:11.574 20:24:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.574 20:24:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.574 1+0 records in 00:07:11.574 1+0 records out 00:07:11.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406642 s, 10.1 MB/s 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:11.574 20:24:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:11.574 20:24:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.574 20:24:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.574 20:24:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.574 20:24:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.574 20:24:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.141 { 00:07:12.141 "nbd_device": "/dev/nbd0", 00:07:12.141 "bdev_name": "Malloc0" 00:07:12.141 }, 00:07:12.141 { 00:07:12.141 "nbd_device": "/dev/nbd1", 00:07:12.141 "bdev_name": "Malloc1" 00:07:12.141 } 
00:07:12.141 ]' 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.141 { 00:07:12.141 "nbd_device": "/dev/nbd0", 00:07:12.141 "bdev_name": "Malloc0" 00:07:12.141 }, 00:07:12.141 { 00:07:12.141 "nbd_device": "/dev/nbd1", 00:07:12.141 "bdev_name": "Malloc1" 00:07:12.141 } 00:07:12.141 ]' 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.141 /dev/nbd1' 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.141 /dev/nbd1' 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.141 256+0 records in 00:07:12.141 256+0 records out 00:07:12.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00676445 s, 155 MB/s 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.141 256+0 records in 00:07:12.141 256+0 records out 00:07:12.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286345 s, 36.6 MB/s 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.141 256+0 records in 00:07:12.141 256+0 records out 00:07:12.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270741 s, 38.7 MB/s 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.141 20:24:06 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.141 20:24:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.400 20:24:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.400 20:24:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.400 20:24:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.400 20:24:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.400 20:24:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.400 20:24:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.400 20:24:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.400 20:24:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.400 20:24:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.400 20:24:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.658 20:24:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.658 20:24:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.658 20:24:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.658 20:24:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.658 20:24:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.658 20:24:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.658 20:24:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.658 20:24:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.658 20:24:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.658 20:24:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.658 20:24:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.917 20:24:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.917 20:24:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.917 20:24:06 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:12.917 20:24:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.917 20:24:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.917 20:24:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.917 20:24:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:12.917 20:24:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.917 20:24:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.917 20:24:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.917 20:24:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.917 20:24:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.917 20:24:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.176 20:24:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:13.436 [2024-07-12 20:24:07.407529] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.436 [2024-07-12 20:24:07.472016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.436 [2024-07-12 20:24:07.472021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.436 [2024-07-12 20:24:07.531790] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:13.436 [2024-07-12 20:24:07.531913] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:16.721 spdk_app_start Round 2 00:07:16.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.721 20:24:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:16.721 20:24:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:16.721 20:24:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 76613 /var/tmp/spdk-nbd.sock 00:07:16.721 20:24:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 76613 ']' 00:07:16.721 20:24:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.721 20:24:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.721 20:24:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:16.721 20:24:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.721 20:24:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.721 20:24:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.721 20:24:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:16.721 20:24:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:16.721 Malloc0 00:07:16.721 20:24:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:16.980 Malloc1 00:07:16.980 20:24:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.980 20:24:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.239 /dev/nbd0 00:07:17.239 20:24:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.239 20:24:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.239 1+0 records in 00:07:17.239 1+0 records out 
00:07:17.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335038 s, 12.2 MB/s 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:17.239 20:24:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:17.239 20:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.239 20:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.239 20:24:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:17.497 /dev/nbd1 00:07:17.497 20:24:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.497 20:24:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.497 20:24:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:17.497 20:24:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:17.497 20:24:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:17.497 20:24:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:17.497 20:24:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:17.497 20:24:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:17.497 20:24:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:17.497 20:24:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:17.498 20:24:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.498 1+0 records in 00:07:17.498 1+0 records out 00:07:17.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054323 s, 7.5 MB/s 00:07:17.498 20:24:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.498 20:24:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:17.498 20:24:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.498 20:24:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:17.498 20:24:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:17.498 20:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.498 20:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.498 20:24:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.498 20:24:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.498 20:24:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.756 20:24:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:17.756 { 00:07:17.756 "nbd_device": "/dev/nbd0", 00:07:17.756 "bdev_name": "Malloc0" 00:07:17.756 }, 00:07:17.756 { 00:07:17.756 "nbd_device": "/dev/nbd1", 00:07:17.756 "bdev_name": "Malloc1" 00:07:17.756 } 
00:07:17.756 ]' 00:07:17.756 20:24:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:17.756 { 00:07:17.756 "nbd_device": "/dev/nbd0", 00:07:17.756 "bdev_name": "Malloc0" 00:07:17.756 }, 00:07:17.756 { 00:07:17.756 "nbd_device": "/dev/nbd1", 00:07:17.756 "bdev_name": "Malloc1" 00:07:17.756 } 00:07:17.756 ]' 00:07:17.756 20:24:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.756 20:24:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:17.756 /dev/nbd1' 00:07:17.756 20:24:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.756 20:24:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:17.756 /dev/nbd1' 00:07:17.756 20:24:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:17.756 20:24:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.015 256+0 records in 00:07:18.015 256+0 records out 00:07:18.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00677059 s, 155 MB/s 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.015 256+0 records in 00:07:18.015 256+0 records out 00:07:18.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290513 s, 36.1 MB/s 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.015 256+0 records in 00:07:18.015 256+0 records out 00:07:18.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0343347 s, 30.5 MB/s 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.015 20:24:11 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.015 20:24:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.015 20:24:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.015 20:24:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.015 20:24:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.015 20:24:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.015 20:24:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.015 20:24:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.015 20:24:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.015 20:24:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.274 20:24:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.274 20:24:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.274 20:24:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.274 20:24:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.274 20:24:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.274 20:24:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.274 20:24:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.274 20:24:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.274 20:24:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.274 20:24:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.533 20:24:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.533 20:24:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.533 20:24:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.533 20:24:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.533 20:24:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.533 20:24:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.533 20:24:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.533 20:24:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.533 20:24:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.533 20:24:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.533 20:24:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.791 20:24:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.791 20:24:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.791 20:24:12 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:18.791 20:24:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.791 20:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.791 20:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.791 20:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:18.791 20:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.791 20:24:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.791 20:24:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:18.791 20:24:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:18.791 20:24:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:18.792 20:24:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.050 20:24:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.308 [2024-07-12 20:24:13.316118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.309 [2024-07-12 20:24:13.398028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.309 [2024-07-12 20:24:13.398040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.567 [2024-07-12 20:24:13.457701] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:19.567 [2024-07-12 20:24:13.457799] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:22.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.098 20:24:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 76613 /var/tmp/spdk-nbd.sock 00:07:22.098 20:24:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 76613 ']' 00:07:22.098 20:24:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.098 20:24:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.098 20:24:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:22.098 20:24:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.098 20:24:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.356 20:24:16 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.356 20:24:16 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:22.356 20:24:16 event.app_repeat -- event/event.sh@39 -- # killprocess 76613 00:07:22.356 20:24:16 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 76613 ']' 00:07:22.356 20:24:16 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 76613 00:07:22.356 20:24:16 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:22.356 20:24:16 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.356 20:24:16 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76613 00:07:22.356 killing process with pid 76613 00:07:22.356 20:24:16 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:22.356 20:24:16 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:22.356 20:24:16 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76613' 00:07:22.356 20:24:16 event.app_repeat -- common/autotest_common.sh@967 -- # kill 76613 00:07:22.356 20:24:16 event.app_repeat -- common/autotest_common.sh@972 -- # wait 76613 00:07:22.614 spdk_app_start is called in Round 0. 00:07:22.614 Shutdown signal received, stop current app iteration 00:07:22.614 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 reinitialization... 00:07:22.614 spdk_app_start is called in Round 1. 00:07:22.614 Shutdown signal received, stop current app iteration 00:07:22.614 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 reinitialization... 00:07:22.614 spdk_app_start is called in Round 2. 00:07:22.614 Shutdown signal received, stop current app iteration 00:07:22.614 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 reinitialization... 00:07:22.614 spdk_app_start is called in Round 3. 
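Each app_repeat round traced above repeats the same nbd_rpc_data_verify pattern. The bash sketch below is a condensed, illustrative recap of that flow and is not part of the test output; the rpc.py subcommands, socket path, and bdev names are taken from the trace, while the loop structure and variable names are simplified for readability.

```bash
# Illustrative recap of the nbd_rpc_data_verify flow traced above (not test output).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

# Export each Malloc bdev over NBD.
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

# Write 1 MiB of random data to every NBD device, then verify it byte-for-byte.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"
done
rm "$tmp"

# Tear down and confirm no NBD devices remain registered.
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]
```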
00:07:22.614 Shutdown signal received, stop current app iteration 00:07:22.614 20:24:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:22.614 20:24:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:22.614 00:07:22.614 real 0m19.046s 00:07:22.614 user 0m42.587s 00:07:22.614 sys 0m2.924s 00:07:22.614 20:24:16 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.614 20:24:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.614 ************************************ 00:07:22.614 END TEST app_repeat 00:07:22.614 ************************************ 00:07:22.614 20:24:16 event -- common/autotest_common.sh@1142 -- # return 0 00:07:22.614 20:24:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:22.614 20:24:16 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:22.614 20:24:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.614 20:24:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.614 20:24:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.614 ************************************ 00:07:22.614 START TEST cpu_locks 00:07:22.614 ************************************ 00:07:22.614 20:24:16 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:22.614 * Looking for test storage... 00:07:22.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:22.614 20:24:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:22.614 20:24:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:22.614 20:24:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:22.614 20:24:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:22.614 20:24:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.614 20:24:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.614 20:24:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.872 ************************************ 00:07:22.872 START TEST default_locks 00:07:22.872 ************************************ 00:07:22.872 20:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:22.872 20:24:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=77053 00:07:22.872 20:24:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 77053 00:07:22.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.872 20:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 77053 ']' 00:07:22.872 20:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.872 20:24:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.872 20:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.872 20:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:22.872 20:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.872 20:24:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.872 [2024-07-12 20:24:16.870645] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:22.872 [2024-07-12 20:24:16.870832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77053 ] 00:07:22.872 [2024-07-12 20:24:17.014336] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:23.130 [2024-07-12 20:24:17.036671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.130 [2024-07-12 20:24:17.123486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.695 20:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.695 20:24:17 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:23.695 20:24:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 77053 00:07:23.695 20:24:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 77053 00:07:23.695 20:24:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.261 20:24:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 77053 00:07:24.261 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 77053 ']' 00:07:24.261 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 77053 00:07:24.261 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:24.261 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:24.261 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77053 00:07:24.261 killing process with pid 77053 00:07:24.261 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:24.261 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:24.261 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77053' 00:07:24.261 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 77053 00:07:24.261 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 77053 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 77053 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 77053 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 77053 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 77053 ']' 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:24.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.520 ERROR: process (pid: 77053) is no longer running 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.520 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (77053) - No such process 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:24.520 00:07:24.520 real 0m1.842s 00:07:24.520 user 0m1.853s 00:07:24.520 sys 0m0.606s 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.520 ************************************ 00:07:24.520 END TEST default_locks 00:07:24.520 ************************************ 00:07:24.520 20:24:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.520 20:24:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:24.520 20:24:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:24.520 20:24:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.520 20:24:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.520 20:24:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.520 ************************************ 00:07:24.520 START TEST default_locks_via_rpc 00:07:24.520 ************************************ 00:07:24.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:24.520 20:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:24.520 20:24:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=77106 00:07:24.520 20:24:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 77106 00:07:24.520 20:24:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.521 20:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 77106 ']' 00:07:24.521 20:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.521 20:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:24.521 20:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.521 20:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:24.521 20:24:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.779 [2024-07-12 20:24:18.779506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:24.779 [2024-07-12 20:24:18.779699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77106 ] 00:07:25.037 [2024-07-12 20:24:18.931343] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:25.037 [2024-07-12 20:24:18.952118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.037 [2024-07-12 20:24:19.018226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 77106 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.603 20:24:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 77106 00:07:26.170 20:24:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 77106 00:07:26.170 20:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 77106 ']' 00:07:26.170 20:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 77106 00:07:26.170 20:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:26.170 20:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.170 20:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77106 00:07:26.170 killing process with pid 77106 00:07:26.170 20:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.170 20:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.170 20:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77106' 00:07:26.170 20:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 77106 00:07:26.170 20:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 77106 00:07:26.738 ************************************ 00:07:26.738 END TEST default_locks_via_rpc 00:07:26.738 
************************************ 00:07:26.738 00:07:26.738 real 0m1.956s 00:07:26.738 user 0m2.019s 00:07:26.738 sys 0m0.646s 00:07:26.738 20:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.738 20:24:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.738 20:24:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:26.738 20:24:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:26.738 20:24:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.738 20:24:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.738 20:24:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.738 ************************************ 00:07:26.738 START TEST non_locking_app_on_locked_coremask 00:07:26.738 ************************************ 00:07:26.738 20:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:26.738 20:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=77156 00:07:26.738 20:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 77156 /var/tmp/spdk.sock 00:07:26.738 20:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 77156 ']' 00:07:26.738 20:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.738 20:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.738 20:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.738 20:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.738 20:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.738 20:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.738 [2024-07-12 20:24:20.788385] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:26.738 [2024-07-12 20:24:20.788599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77156 ] 00:07:26.996 [2024-07-12 20:24:20.942854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.996 [2024-07-12 20:24:20.962209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.996 [2024-07-12 20:24:21.059066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:27.563 20:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.563 20:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:27.563 20:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=77169 00:07:27.563 20:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:27.563 20:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 77169 /var/tmp/spdk2.sock 00:07:27.563 20:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 77169 ']' 00:07:27.563 20:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.563 20:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.563 20:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.563 20:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.563 20:24:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.821 [2024-07-12 20:24:21.814079] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:27.821 [2024-07-12 20:24:21.814801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77169 ] 00:07:28.079 [2024-07-12 20:24:21.972132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:28.079 [2024-07-12 20:24:22.002556] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
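At this point non_locking_app_on_locked_coremask has two targets running: pid 77156 holds the core-0 lock, and pid 77169 was started with --disable-cpumask-locks on its own RPC socket, so it can share core 0 without trying to claim the lock. Both command lines below are copied from the trace and shown side by side only to make the difference explicit.

# first instance: claims the CPU core lock for core 0 (default behaviour)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
# second instance: same core mask, but opts out of core locking and listens on a second socket
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &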
00:07:28.079 [2024-07-12 20:24:22.002622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.079 [2024-07-12 20:24:22.175765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.644 20:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.644 20:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:28.644 20:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 77156 00:07:28.644 20:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 77156 00:07:28.644 20:24:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.577 20:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 77156 00:07:29.577 20:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 77156 ']' 00:07:29.577 20:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 77156 00:07:29.577 20:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:29.577 20:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.577 20:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77156 00:07:29.577 killing process with pid 77156 00:07:29.577 20:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.577 20:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.577 20:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77156' 00:07:29.577 20:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 77156 00:07:29.577 20:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 77156 00:07:30.512 20:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 77169 00:07:30.512 20:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 77169 ']' 00:07:30.512 20:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 77169 00:07:30.512 20:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:30.512 20:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.512 20:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77169 00:07:30.512 killing process with pid 77169 00:07:30.512 20:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:30.512 20:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:30.512 20:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77169' 00:07:30.512 20:24:24 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 77169 00:07:30.512 20:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 77169 00:07:30.770 00:07:30.770 real 0m4.242s 00:07:30.770 user 0m4.541s 00:07:30.770 sys 0m1.336s 00:07:30.770 20:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.770 20:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.770 ************************************ 00:07:30.770 END TEST non_locking_app_on_locked_coremask 00:07:30.770 ************************************ 00:07:31.029 20:24:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:31.029 20:24:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:31.029 20:24:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.029 20:24:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.029 20:24:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.029 ************************************ 00:07:31.029 START TEST locking_app_on_unlocked_coremask 00:07:31.029 ************************************ 00:07:31.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.029 20:24:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:31.029 20:24:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=77243 00:07:31.029 20:24:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:31.029 20:24:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 77243 /var/tmp/spdk.sock 00:07:31.029 20:24:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 77243 ']' 00:07:31.029 20:24:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.029 20:24:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.029 20:24:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.029 20:24:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.029 20:24:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.029 [2024-07-12 20:24:25.091246] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:31.029 [2024-07-12 20:24:25.091450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77243 ] 00:07:31.287 [2024-07-12 20:24:25.235575] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:31.287 [2024-07-12 20:24:25.254595] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:31.287 [2024-07-12 20:24:25.254650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.287 [2024-07-12 20:24:25.355755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.221 20:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.221 20:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:32.221 20:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:32.221 20:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=77259 00:07:32.221 20:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 77259 /var/tmp/spdk2.sock 00:07:32.221 20:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 77259 ']' 00:07:32.221 20:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.221 20:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.221 20:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.221 20:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.221 20:24:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.221 [2024-07-12 20:24:26.097414] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:32.221 [2024-07-12 20:24:26.097866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77259 ] 00:07:32.222 [2024-07-12 20:24:26.242115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:32.222 [2024-07-12 20:24:26.268147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.479 [2024-07-12 20:24:26.460872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.046 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.046 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:33.046 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 77259 00:07:33.046 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 77259 00:07:33.046 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:33.611 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 77243 00:07:33.611 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 77243 ']' 00:07:33.611 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 77243 00:07:33.611 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:33.611 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.611 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77243 00:07:33.869 killing process with pid 77243 00:07:33.869 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:33.869 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:33.869 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77243' 00:07:33.869 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 77243 00:07:33.869 20:24:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 77243 00:07:34.804 20:24:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 77259 00:07:34.804 20:24:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 77259 ']' 00:07:34.804 20:24:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 77259 00:07:34.804 20:24:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:34.804 20:24:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.804 20:24:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77259 00:07:34.804 killing process with pid 77259 00:07:34.804 20:24:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.804 20:24:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.804 20:24:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77259' 00:07:34.804 20:24:28 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 77259 00:07:34.804 20:24:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 77259 00:07:35.071 ************************************ 00:07:35.072 END TEST locking_app_on_unlocked_coremask 00:07:35.072 ************************************ 00:07:35.072 00:07:35.072 real 0m4.206s 00:07:35.072 user 0m4.522s 00:07:35.072 sys 0m1.245s 00:07:35.072 20:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.072 20:24:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.072 20:24:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:35.072 20:24:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:35.072 20:24:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.072 20:24:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.072 20:24:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.072 ************************************ 00:07:35.072 START TEST locking_app_on_locked_coremask 00:07:35.072 ************************************ 00:07:35.072 20:24:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:35.072 20:24:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=77328 00:07:35.072 20:24:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 77328 /var/tmp/spdk.sock 00:07:35.072 20:24:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 77328 ']' 00:07:35.072 20:24:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.072 20:24:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:35.072 20:24:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.072 20:24:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.072 20:24:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.072 20:24:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.330 [2024-07-12 20:24:29.332366] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:35.330 [2024-07-12 20:24:29.332552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77328 ] 00:07:35.590 [2024-07-12 20:24:29.485663] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:35.590 [2024-07-12 20:24:29.507833] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.590 [2024-07-12 20:24:29.602557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=77349 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 77349 /var/tmp/spdk2.sock 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 77349 /var/tmp/spdk2.sock 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 77349 /var/tmp/spdk2.sock 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 77349 ']' 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.157 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.416 [2024-07-12 20:24:30.352116] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:36.416 [2024-07-12 20:24:30.352574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77349 ] 00:07:36.416 [2024-07-12 20:24:30.500091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:36.416 [2024-07-12 20:24:30.526779] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 77328 has claimed it. 
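The *ERROR* line above is the expected result of locking_app_on_locked_coremask: pid 77328 already holds the lock for core 0, so a second target started with the same mask and without --disable-cpumask-locks has to abort. The check the harness uses for "is the lock held" is the locks_exist helper seen earlier in the trace; a hedged equivalent is sketched below (the lock files are the /var/tmp/spdk_cpu_lock_* entries listed later in the log).

# a core counts as claimed if the owning process holds a lock on its spdk_cpu_lock_* file
if lslocks -p 77328 | grep -q spdk_cpu_lock; then
    echo 'core 0 is locked by pid 77328; a second -m 0x1 target will fail to start'
fi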
00:07:36.416 [2024-07-12 20:24:30.526869] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:36.982 ERROR: process (pid: 77349) is no longer running 00:07:36.982 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (77349) - No such process 00:07:36.982 20:24:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.982 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:36.982 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:36.982 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.982 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:36.982 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.982 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 77328 00:07:36.982 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 77328 00:07:36.982 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:37.549 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 77328 00:07:37.549 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 77328 ']' 00:07:37.549 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 77328 00:07:37.549 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:37.549 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.549 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77328 00:07:37.549 killing process with pid 77328 00:07:37.549 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.549 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.549 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77328' 00:07:37.549 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 77328 00:07:37.549 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 77328 00:07:37.807 00:07:37.807 real 0m2.688s 00:07:37.807 user 0m2.993s 00:07:37.807 sys 0m0.798s 00:07:37.807 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.807 ************************************ 00:07:37.807 END TEST locking_app_on_locked_coremask 00:07:37.807 ************************************ 00:07:37.807 20:24:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.807 20:24:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:37.807 20:24:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:37.808 20:24:31 event.cpu_locks -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.808 20:24:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.808 20:24:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.808 ************************************ 00:07:37.808 START TEST locking_overlapped_coremask 00:07:37.808 ************************************ 00:07:37.808 20:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:38.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.066 20:24:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=77392 00:07:38.066 20:24:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 77392 /var/tmp/spdk.sock 00:07:38.066 20:24:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:38.066 20:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 77392 ']' 00:07:38.066 20:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.066 20:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.066 20:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.066 20:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.066 20:24:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.066 [2024-07-12 20:24:32.070429] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:38.066 [2024-07-12 20:24:32.070654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77392 ] 00:07:38.325 [2024-07-12 20:24:32.223514] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:38.325 [2024-07-12 20:24:32.245337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.325 [2024-07-12 20:24:32.336850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.325 [2024-07-12 20:24:32.336935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.325 [2024-07-12 20:24:32.336975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=77410 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 77410 /var/tmp/spdk2.sock 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 77410 /var/tmp/spdk2.sock 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 77410 /var/tmp/spdk2.sock 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 77410 ']' 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:38.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.907 20:24:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.179 [2024-07-12 20:24:33.084230] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:39.179 [2024-07-12 20:24:33.084489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77410 ] 00:07:39.179 [2024-07-12 20:24:33.246345] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:39.179 [2024-07-12 20:24:33.270804] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 77392 has claimed it. 00:07:39.179 [2024-07-12 20:24:33.270901] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:39.746 ERROR: process (pid: 77410) is no longer running 00:07:39.746 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (77410) - No such process 00:07:39.746 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.746 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:39.746 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:39.746 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:39.746 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:39.746 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:39.746 20:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:39.746 20:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:39.746 20:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:39.746 20:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:39.746 20:24:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 77392 00:07:39.747 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 77392 ']' 00:07:39.747 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 77392 00:07:39.747 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:39.747 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:39.747 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77392 00:07:39.747 killing process with pid 77392 00:07:39.747 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:39.747 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:39.747 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77392' 00:07:39.747 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 77392 00:07:39.747 20:24:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 77392 00:07:40.315 00:07:40.315 real 0m2.276s 00:07:40.315 user 0m6.016s 00:07:40.315 sys 0m0.604s 00:07:40.315 20:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.315 
************************************ 00:07:40.315 20:24:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.315 END TEST locking_overlapped_coremask 00:07:40.315 ************************************ 00:07:40.315 20:24:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:40.315 20:24:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:40.315 20:24:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:40.315 20:24:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.315 20:24:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.315 ************************************ 00:07:40.315 START TEST locking_overlapped_coremask_via_rpc 00:07:40.315 ************************************ 00:07:40.315 20:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:40.315 20:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=77456 00:07:40.315 20:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:40.315 20:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 77456 /var/tmp/spdk.sock 00:07:40.315 20:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 77456 ']' 00:07:40.315 20:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.315 20:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.315 20:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.315 20:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.315 20:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.315 [2024-07-12 20:24:34.393697] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:40.315 [2024-07-12 20:24:34.393872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77456 ] 00:07:40.574 [2024-07-12 20:24:34.538552] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:40.574 [2024-07-12 20:24:34.557671] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
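Both overlapped-coremask cases turn on the same mask arithmetic: the first target runs with -m 0x7, the second with -m 0x1c, and the two masks intersect on exactly one core, which is why the failure earlier in the trace reads 'Cannot create lock on core 2'. A quick way to confirm the overlap, using the mask values from the command lines above:

# 0x7  = 0b00111 -> cores 0,1,2
# 0x1c = 0b11100 -> cores 2,3,4
printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. only core 2 is shared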
00:07:40.574 [2024-07-12 20:24:34.557922] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:40.574 [2024-07-12 20:24:34.660289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.575 [2024-07-12 20:24:34.660367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.575 [2024-07-12 20:24:34.660437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.511 20:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.511 20:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:41.511 20:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:41.511 20:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=77470 00:07:41.511 20:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 77470 /var/tmp/spdk2.sock 00:07:41.511 20:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 77470 ']' 00:07:41.511 20:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.511 20:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.511 20:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:41.511 20:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.511 20:24:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.511 [2024-07-12 20:24:35.412964] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:41.511 [2024-07-12 20:24:35.413161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77470 ] 00:07:41.511 [2024-07-12 20:24:35.573192] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.511 [2024-07-12 20:24:35.598548] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:41.511 [2024-07-12 20:24:35.598632] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.770 [2024-07-12 20:24:35.791016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.770 [2024-07-12 20:24:35.794365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.770 [2024-07-12 20:24:35.794411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.337 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.338 [2024-07-12 20:24:36.421457] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 77456 has claimed it. 00:07:42.338 request: 00:07:42.338 { 00:07:42.338 "method": "framework_enable_cpumask_locks", 00:07:42.338 "req_id": 1 00:07:42.338 } 00:07:42.338 Got JSON-RPC error response 00:07:42.338 response: 00:07:42.338 { 00:07:42.338 "code": -32603, 00:07:42.338 "message": "Failed to claim CPU core: 2" 00:07:42.338 } 00:07:42.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:42.338 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:42.338 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:42.338 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.338 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:42.338 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.338 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 77456 /var/tmp/spdk.sock 00:07:42.338 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 77456 ']' 00:07:42.338 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.338 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.338 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.338 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.338 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.597 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.597 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:42.597 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 77470 /var/tmp/spdk2.sock 00:07:42.597 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 77470 ']' 00:07:42.597 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.597 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.597 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
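In the via_rpc variant both targets start with --disable-cpumask-locks and the locks are toggled afterwards over JSON-RPC; the request/response pair above is the second target being refused with -32603 because core 2 is already claimed. Outside the harness the same call could be issued with the stock rpc.py client, roughly as below (socket path from the trace; the test's rpc_cmd helper is a thin wrapper around the same script, so this is an illustration rather than the exact invocation).

# ask the target on spdk2.sock to claim its core locks; with core 2 already held
# elsewhere this is the call that fails with 'Failed to claim CPU core: 2'
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks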
00:07:42.597 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.597 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.856 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.856 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:42.856 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:42.856 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:42.856 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:42.856 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:42.856 ************************************ 00:07:42.856 END TEST locking_overlapped_coremask_via_rpc 00:07:42.856 ************************************ 00:07:42.856 00:07:42.856 real 0m2.686s 00:07:42.856 user 0m1.411s 00:07:42.856 sys 0m0.188s 00:07:42.856 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.856 20:24:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.115 20:24:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:43.115 20:24:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:43.115 20:24:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 77456 ]] 00:07:43.115 20:24:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 77456 00:07:43.115 20:24:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 77456 ']' 00:07:43.115 20:24:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 77456 00:07:43.115 20:24:37 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:43.115 20:24:37 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.115 20:24:37 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77456 00:07:43.115 killing process with pid 77456 00:07:43.115 20:24:37 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.115 20:24:37 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.115 20:24:37 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77456' 00:07:43.115 20:24:37 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 77456 00:07:43.115 20:24:37 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 77456 00:07:43.374 20:24:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 77470 ]] 00:07:43.374 20:24:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 77470 00:07:43.374 20:24:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 77470 ']' 00:07:43.374 20:24:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 77470 00:07:43.374 20:24:37 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:43.374 20:24:37 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.374 20:24:37 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77470 00:07:43.642 killing process with pid 77470 00:07:43.642 20:24:37 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:43.642 20:24:37 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:43.642 20:24:37 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77470' 00:07:43.642 20:24:37 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 77470 00:07:43.642 20:24:37 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 77470 00:07:43.903 20:24:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:43.903 Process with pid 77456 is not found 00:07:43.903 Process with pid 77470 is not found 00:07:43.903 20:24:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:43.903 20:24:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 77456 ]] 00:07:43.903 20:24:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 77456 00:07:43.903 20:24:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 77456 ']' 00:07:43.903 20:24:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 77456 00:07:43.903 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (77456) - No such process 00:07:43.903 20:24:37 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 77456 is not found' 00:07:43.903 20:24:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 77470 ]] 00:07:43.903 20:24:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 77470 00:07:43.903 20:24:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 77470 ']' 00:07:43.903 20:24:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 77470 00:07:43.903 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (77470) - No such process 00:07:43.903 20:24:37 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 77470 is not found' 00:07:43.903 20:24:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:43.903 ************************************ 00:07:43.903 END TEST cpu_locks 00:07:43.903 ************************************ 00:07:43.903 00:07:43.903 real 0m21.317s 00:07:43.903 user 0m36.074s 00:07:43.903 sys 0m6.463s 00:07:43.903 20:24:37 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.903 20:24:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.903 20:24:38 event -- common/autotest_common.sh@1142 -- # return 0 00:07:43.903 00:07:43.903 real 0m49.626s 00:07:43.903 user 1m33.750s 00:07:43.903 sys 0m10.441s 00:07:43.903 20:24:38 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.903 ************************************ 00:07:43.903 END TEST event 00:07:43.903 ************************************ 00:07:43.903 20:24:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:44.160 20:24:38 -- common/autotest_common.sh@1142 -- # return 0 00:07:44.160 20:24:38 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:44.160 20:24:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.160 20:24:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.160 20:24:38 -- common/autotest_common.sh@10 -- # set +x 00:07:44.160 ************************************ 00:07:44.160 START TEST thread 
00:07:44.160 ************************************ 00:07:44.160 20:24:38 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:44.160 * Looking for test storage... 00:07:44.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:44.160 20:24:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:44.160 20:24:38 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:44.160 20:24:38 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.160 20:24:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:44.160 ************************************ 00:07:44.160 START TEST thread_poller_perf 00:07:44.160 ************************************ 00:07:44.160 20:24:38 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:44.160 [2024-07-12 20:24:38.212845] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:44.160 [2024-07-12 20:24:38.213010] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77600 ] 00:07:44.417 [2024-07-12 20:24:38.354538] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:44.417 [2024-07-12 20:24:38.379846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.417 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:44.417 [2024-07-12 20:24:38.475447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.790 ====================================== 00:07:45.790 busy:2214997509 (cyc) 00:07:45.790 total_run_count: 286000 00:07:45.790 tsc_hz: 2200000000 (cyc) 00:07:45.790 ====================================== 00:07:45.790 poller_cost: 7744 (cyc), 3520 (nsec) 00:07:45.790 00:07:45.790 real 0m1.401s 00:07:45.790 user 0m1.196s 00:07:45.790 sys 0m0.097s 00:07:45.790 20:24:39 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.790 20:24:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:45.790 ************************************ 00:07:45.790 END TEST thread_poller_perf 00:07:45.790 ************************************ 00:07:45.790 20:24:39 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:45.790 20:24:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:45.790 20:24:39 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:45.790 20:24:39 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.790 20:24:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.790 ************************************ 00:07:45.790 START TEST thread_poller_perf 00:07:45.790 ************************************ 00:07:45.790 20:24:39 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:45.790 [2024-07-12 20:24:39.685486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:07:45.790 [2024-07-12 20:24:39.685689] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77642 ] 00:07:45.790 [2024-07-12 20:24:39.837194] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:45.790 [2024-07-12 20:24:39.859376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.048 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:46.048 [2024-07-12 20:24:39.955476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.984 ====================================== 00:07:46.984 busy:2204125254 (cyc) 00:07:46.984 total_run_count: 3760000 00:07:46.984 tsc_hz: 2200000000 (cyc) 00:07:46.984 ====================================== 00:07:46.984 poller_cost: 586 (cyc), 266 (nsec) 00:07:46.984 00:07:46.984 real 0m1.413s 00:07:46.984 user 0m1.200s 00:07:46.984 sys 0m0.105s 00:07:46.984 20:24:41 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.984 ************************************ 00:07:46.984 END TEST thread_poller_perf 00:07:46.984 ************************************ 00:07:46.984 20:24:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:46.984 20:24:41 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:46.984 20:24:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:46.984 00:07:46.984 real 0m2.997s 00:07:46.984 user 0m2.471s 00:07:46.984 sys 0m0.303s 00:07:46.984 20:24:41 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.984 20:24:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:46.984 ************************************ 00:07:46.984 END TEST thread 00:07:46.984 ************************************ 00:07:46.984 20:24:41 -- common/autotest_common.sh@1142 -- # return 0 00:07:46.984 20:24:41 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:46.984 20:24:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.984 20:24:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.984 20:24:41 -- common/autotest_common.sh@10 -- # set +x 00:07:46.984 ************************************ 00:07:46.984 START TEST accel 00:07:46.984 ************************************ 00:07:46.984 20:24:41 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:47.243 * Looking for test storage... 
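The poller_cost figures in the two poller_perf summaries above follow directly from the counters they are printed next to: cycles per poll is busy cycles divided by total_run_count, and the nanosecond value is that result divided by tsc_hz expressed in cycles per nanosecond. A quick stand-alone check of the first run's numbers (not part of the test suite; integer truncation assumed):

  busy=2214997509 runs=286000 tsc_hz=2200000000
  awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
      cyc = int(b / r)                                    # 7744 cycles per poll
      printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, int(cyc / (hz / 1e9))
  }'

The second run works out the same way: 2204125254 / 3760000 gives 586 cycles per poll, or 266 ns at 2.2 GHz.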
00:07:47.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:47.243 20:24:41 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:47.243 20:24:41 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:47.243 20:24:41 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:47.243 20:24:41 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=77712 00:07:47.243 20:24:41 accel -- accel/accel.sh@63 -- # waitforlisten 77712 00:07:47.243 20:24:41 accel -- common/autotest_common.sh@829 -- # '[' -z 77712 ']' 00:07:47.243 20:24:41 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.243 20:24:41 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.243 20:24:41 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.243 20:24:41 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.243 20:24:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.243 20:24:41 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:47.243 20:24:41 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:47.243 20:24:41 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.243 20:24:41 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.243 20:24:41 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.243 20:24:41 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.243 20:24:41 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.243 20:24:41 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:47.243 20:24:41 accel -- accel/accel.sh@41 -- # jq -r . 00:07:47.243 [2024-07-12 20:24:41.304224] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:47.243 [2024-07-12 20:24:41.304427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77712 ] 00:07:47.502 [2024-07-12 20:24:41.446782] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:47.502 [2024-07-12 20:24:41.466105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.502 [2024-07-12 20:24:41.559255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.069 20:24:42 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.069 20:24:42 accel -- common/autotest_common.sh@862 -- # return 0 00:07:48.069 20:24:42 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:48.069 20:24:42 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:48.069 20:24:42 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:48.069 20:24:42 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:48.069 20:24:42 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:48.069 20:24:42 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:48.069 20:24:42 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.069 20:24:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.069 20:24:42 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:48.329 20:24:42 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:48.329 20:24:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:48.329 20:24:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:48.329 20:24:42 accel -- accel/accel.sh@75 -- # killprocess 77712 00:07:48.329 20:24:42 accel -- common/autotest_common.sh@948 -- # '[' -z 77712 ']' 00:07:48.329 20:24:42 accel -- common/autotest_common.sh@952 -- # kill -0 77712 00:07:48.329 20:24:42 accel -- common/autotest_common.sh@953 -- # uname 00:07:48.329 20:24:42 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.329 20:24:42 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77712 00:07:48.329 20:24:42 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:48.329 killing process with pid 77712 00:07:48.329 20:24:42 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:48.329 20:24:42 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77712' 00:07:48.329 20:24:42 accel -- common/autotest_common.sh@967 -- # kill 77712 00:07:48.329 20:24:42 accel -- common/autotest_common.sh@972 -- # wait 77712 00:07:48.896 20:24:42 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:48.896 20:24:42 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:48.896 20:24:42 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:48.896 20:24:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.896 20:24:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.896 20:24:42 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:48.896 20:24:42 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:48.896 20:24:42 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:48.896 20:24:42 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.896 20:24:42 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.896 20:24:42 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.896 20:24:42 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.896 20:24:42 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.896 20:24:42 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:48.896 20:24:42 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:48.896 20:24:42 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.896 20:24:42 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:48.896 20:24:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.896 20:24:42 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:48.896 20:24:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:48.896 20:24:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.896 20:24:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.896 ************************************ 00:07:48.896 START TEST accel_missing_filename 00:07:48.896 ************************************ 00:07:48.896 20:24:42 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:48.896 20:24:42 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:48.896 20:24:42 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:48.896 20:24:42 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:48.896 20:24:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:48.896 20:24:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:48.896 20:24:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:48.896 20:24:42 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:48.896 20:24:42 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:48.896 20:24:42 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:48.896 20:24:42 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.896 20:24:42 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.896 20:24:42 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.896 20:24:42 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.896 20:24:42 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.896 20:24:42 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:48.896 20:24:42 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:48.896 [2024-07-12 20:24:42.914431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:48.896 [2024-07-12 20:24:42.914623] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77771 ] 00:07:49.155 [2024-07-12 20:24:43.064710] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
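accel_missing_filename, which starts here, runs accel_perf with -w compress but no -l input file and passes only if the command fails. The NOT wrapper doing that check is traced step by step further down (es=234, es > 128, es=106, es=1); a simplified reconstruction of what those steps add up to (the real autotest_common.sh helper also validates the argument type and normalizes a few more exit codes):

  NOT() {
      local es=0
      "$@" || es=$?                      # run the wrapped command, keep its exit status
      ((es > 128)) && es=$((es - 128))   # strip the 128+signal offset
      ((es != 0))                        # succeed only if the wrapped command failed
  }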
00:07:49.155 [2024-07-12 20:24:43.086795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.155 [2024-07-12 20:24:43.180167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.155 [2024-07-12 20:24:43.238367] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.413 [2024-07-12 20:24:43.323297] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:49.413 A filename is required. 00:07:49.413 20:24:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:49.414 20:24:43 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:49.414 20:24:43 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:49.414 20:24:43 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:49.414 20:24:43 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:49.414 20:24:43 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:49.414 00:07:49.414 real 0m0.558s 00:07:49.414 user 0m0.327s 00:07:49.414 sys 0m0.167s 00:07:49.414 20:24:43 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.414 ************************************ 00:07:49.414 END TEST accel_missing_filename 00:07:49.414 ************************************ 00:07:49.414 20:24:43 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:49.414 20:24:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.414 20:24:43 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:49.414 20:24:43 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:49.414 20:24:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.414 20:24:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.414 ************************************ 00:07:49.414 START TEST accel_compress_verify 00:07:49.414 ************************************ 00:07:49.414 20:24:43 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:49.414 20:24:43 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:49.414 20:24:43 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:49.414 20:24:43 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:49.414 20:24:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.414 20:24:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:49.414 20:24:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.414 20:24:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:49.414 20:24:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:49.414 20:24:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:49.414 20:24:43 accel.accel_compress_verify -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.414 20:24:43 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.414 20:24:43 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.414 20:24:43 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.414 20:24:43 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.414 20:24:43 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:49.414 20:24:43 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:49.414 [2024-07-12 20:24:43.528220] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:49.414 [2024-07-12 20:24:43.528478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77791 ] 00:07:49.672 [2024-07-12 20:24:43.674760] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:49.672 [2024-07-12 20:24:43.693068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.672 [2024-07-12 20:24:43.790317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.929 [2024-07-12 20:24:43.848990] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.929 [2024-07-12 20:24:43.933415] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:49.929 00:07:49.929 Compression does not support the verify option, aborting. 00:07:49.929 20:24:44 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:49.929 20:24:44 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:49.929 20:24:44 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:49.929 20:24:44 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:49.929 20:24:44 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:49.929 20:24:44 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:49.929 00:07:49.929 real 0m0.560s 00:07:49.929 user 0m0.351s 00:07:49.929 sys 0m0.168s 00:07:49.929 20:24:44 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.929 20:24:44 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:49.929 ************************************ 00:07:49.929 END TEST accel_compress_verify 00:07:49.929 ************************************ 00:07:49.929 20:24:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.929 20:24:44 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:49.929 20:24:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:49.929 20:24:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.929 20:24:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.187 ************************************ 00:07:50.187 START TEST accel_wrong_workload 00:07:50.187 ************************************ 00:07:50.187 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:50.187 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:50.187 20:24:44 
accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:50.187 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:50.187 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.187 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:50.187 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.187 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:50.187 20:24:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:50.187 20:24:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:50.187 20:24:44 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.187 20:24:44 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.187 20:24:44 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.187 20:24:44 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.187 20:24:44 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.187 20:24:44 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:50.187 20:24:44 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:50.187 Unsupported workload type: foobar 00:07:50.187 [2024-07-12 20:24:44.128151] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:50.188 accel_perf options: 00:07:50.188 [-h help message] 00:07:50.188 [-q queue depth per core] 00:07:50.188 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:50.188 [-T number of threads per core 00:07:50.188 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:50.188 [-t time in seconds] 00:07:50.188 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:50.188 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:50.188 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:50.188 [-l for compress/decompress workloads, name of uncompressed input file 00:07:50.188 [-S for crc32c workload, use this seed value (default 0) 00:07:50.188 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:50.188 [-f for fill workload, use this BYTE value (default 255) 00:07:50.188 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:50.188 [-y verify result if this switch is on] 00:07:50.188 [-a tasks to allocate per core (default: same value as -q)] 00:07:50.188 Can be used to spread operations across a wider range of memory. 
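The "Unsupported workload type: foobar" failure above is produced entirely by argument parsing: -w only accepts the workload names listed in the usage text, so accel_perf exits before any I/O is issued and the NOT wrapper sees the non-zero status it expects. Illustrative invocations (stand-alone, without the JSON config the test feeds on /dev/fd/62):

  NOT accel_perf -t 1 -w foobar          # unsupported type -> parse error
  accel_perf -t 1 -w copy -o 4096        # a valid workload from the same list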
00:07:50.188 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:50.188 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.188 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.188 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.188 00:07:50.188 real 0m0.077s 00:07:50.188 user 0m0.071s 00:07:50.188 sys 0m0.048s 00:07:50.188 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.188 20:24:44 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:50.188 ************************************ 00:07:50.188 END TEST accel_wrong_workload 00:07:50.188 ************************************ 00:07:50.188 20:24:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.188 20:24:44 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:50.188 20:24:44 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:50.188 20:24:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.188 20:24:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.188 ************************************ 00:07:50.188 START TEST accel_negative_buffers 00:07:50.188 ************************************ 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:50.188 20:24:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:50.188 20:24:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:50.188 20:24:44 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.188 20:24:44 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.188 20:24:44 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.188 20:24:44 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.188 20:24:44 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.188 20:24:44 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:50.188 20:24:44 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:50.188 -x option must be non-negative. 
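accel_negative_buffers exercises the same parse-time rejection with the xor source-buffer count: the option summary documents a minimum of 2 source buffers, and a negative value is refused before the application starts, as the "-x option must be non-negative." message shows. The case under test and its smallest valid counterpart look like (illustrative, without the shared config):

  NOT accel_perf -t 1 -w xor -y -x -1    # rejected: "-x option must be non-negative."
  accel_perf -t 1 -w xor -y -x 2         # minimum xor configuration per the help text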
00:07:50.188 [2024-07-12 20:24:44.236407] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:50.188 accel_perf options: 00:07:50.188 [-h help message] 00:07:50.188 [-q queue depth per core] 00:07:50.188 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:50.188 [-T number of threads per core 00:07:50.188 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:50.188 [-t time in seconds] 00:07:50.188 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:50.188 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:50.188 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:50.188 [-l for compress/decompress workloads, name of uncompressed input file 00:07:50.188 [-S for crc32c workload, use this seed value (default 0) 00:07:50.188 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:50.188 [-f for fill workload, use this BYTE value (default 255) 00:07:50.188 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:50.188 [-y verify result if this switch is on] 00:07:50.188 [-a tasks to allocate per core (default: same value as -q)] 00:07:50.188 Can be used to spread operations across a wider range of memory. 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.188 00:07:50.188 real 0m0.063s 00:07:50.188 user 0m0.081s 00:07:50.188 sys 0m0.031s 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.188 20:24:44 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:50.188 ************************************ 00:07:50.188 END TEST accel_negative_buffers 00:07:50.188 ************************************ 00:07:50.188 20:24:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.188 20:24:44 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:50.188 20:24:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:50.188 20:24:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.188 20:24:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.188 ************************************ 00:07:50.188 START TEST accel_crc32c 00:07:50.188 ************************************ 00:07:50.188 20:24:44 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:50.188 20:24:44 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:50.447 [2024-07-12 20:24:44.365204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:50.447 [2024-07-12 20:24:44.365462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77858 ] 00:07:50.447 [2024-07-12 20:24:44.534316] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:50.447 [2024-07-12 20:24:44.552495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.705 [2024-07-12 20:24:44.645847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:50.705 20:24:44 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.705 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.706 20:24:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.082 20:24:45 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:52.082 20:24:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.082 00:07:52.082 real 0m1.578s 00:07:52.082 user 0m1.290s 00:07:52.082 sys 0m0.190s 00:07:52.082 20:24:45 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.082 20:24:45 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:52.082 ************************************ 00:07:52.082 END TEST accel_crc32c 00:07:52.082 ************************************ 00:07:52.082 20:24:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.082 20:24:45 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:52.082 20:24:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:52.082 20:24:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.082 20:24:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.082 ************************************ 00:07:52.082 START TEST accel_crc32c_C2 00:07:52.082 ************************************ 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
crc32c -y -C 2 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:52.082 20:24:45 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:52.082 [2024-07-12 20:24:45.996851] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:52.082 [2024-07-12 20:24:45.997109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77899 ] 00:07:52.082 [2024-07-12 20:24:46.148501] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:52.082 [2024-07-12 20:24:46.170227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.341 [2024-07-12 20:24:46.253172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 
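The long runs of case "$var" in / IFS=: / read -r var val above are accel.sh consuming accel_perf's start-up summary line by line: each "key: value" line is split on the colon, and the values visible in the trace (crc32c, 32, '4096 bytes', software, 1, '1 seconds', Yes) are the workload, seed, transfer size, module, thread count, run time and verify flag being read back. A plausible shape of that loop (reconstructed from the trace; the exact key matching and whitespace trimming are not visible in this log and are assumptions):

  while IFS=: read -r var val; do
      val=${val# }                          # assumed: strip the space after the colon
      case "$var" in
          *"Workload Type"*) accel_opc=$val ;;
          *"Module"*)        accel_module=$val ;;
      esac
  done < <(accel_perf "$@")

The test then checks that the module which actually ran is software, which is the [[ software == \s\o\f\t\w\a\r\e ]] comparison that closes each accel case.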
00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.341 20:24:46 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.341 20:24:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.717 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.718 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.718 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.718 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.718 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.718 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.718 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.718 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.718 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:53.718 20:24:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.718 00:07:53.718 real 0m1.563s 00:07:53.718 user 0m0.017s 00:07:53.718 sys 0m0.002s 00:07:53.718 20:24:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.718 ************************************ 00:07:53.718 END TEST accel_crc32c_C2 00:07:53.718 20:24:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:53.718 ************************************ 00:07:53.718 20:24:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:53.718 20:24:47 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:53.718 20:24:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:53.718 20:24:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.718 20:24:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.718 ************************************ 00:07:53.718 START TEST accel_copy 00:07:53.718 ************************************ 00:07:53.718 20:24:47 accel.accel_copy -- common/autotest_common.sh@1123 -- 
# accel_test -t 1 -w copy -y 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:53.718 20:24:47 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:53.718 [2024-07-12 20:24:47.592145] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:53.718 [2024-07-12 20:24:47.592340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77929 ] 00:07:53.718 [2024-07-12 20:24:47.742288] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
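For reference, the accel_copy run above drives the same accel_perf example binary as the preceding crc32c tests; build_accel_config appears to assemble an (empty) accel JSON config from accel_json_cfg and hand it to the binary over file descriptor 62 (-c /dev/fd/62), so with no module configured the run falls back to the software engine, which is what the trailing "[[ -n software ]]" check asserts. A minimal standalone sketch of this copy run, assuming the binary path printed in the log, would be:

# Sketch: re-running the copy workload outside the harness, using the
# binary path and flags recorded above (software engine, no accel config).
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf   # path taken from this log

# -t 1: run for 1 second, -w copy: copy workload, -y: verify the result
"$ACCEL_PERF" -t 1 -w copy -y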
00:07:53.718 [2024-07-12 20:24:47.761637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.718 [2024-07-12 20:24:47.844925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:53.977 20:24:47 accel.accel_copy -- 
accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 20:24:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.352 20:24:49 
accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:55.352 20:24:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.352 00:07:55.352 real 0m1.546s 00:07:55.352 user 0m0.015s 00:07:55.352 sys 0m0.004s 00:07:55.352 20:24:49 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.352 20:24:49 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:55.352 ************************************ 00:07:55.352 END TEST accel_copy 00:07:55.352 ************************************ 00:07:55.352 20:24:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:55.352 20:24:49 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:55.352 20:24:49 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:55.352 20:24:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.352 20:24:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.352 ************************************ 00:07:55.352 START TEST accel_fill 00:07:55.352 ************************************ 00:07:55.352 20:24:49 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:55.352 20:24:49 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:55.352 [2024-07-12 20:24:49.186330] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:55.352 [2024-07-12 20:24:49.186529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77970 ] 00:07:55.352 [2024-07-12 20:24:49.337771] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:55.352 [2024-07-12 20:24:49.358986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.352 [2024-07-12 20:24:49.449390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.611 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:55.611 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.611 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.611 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.611 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:55.611 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.611 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.611 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.611 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:55.611 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:55.612 20:24:49 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 20:24:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.611 20:24:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.611 20:24:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.611 20:24:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.611 20:24:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.612 20:24:50 
accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:56.612 20:24:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.612 00:07:56.612 real 0m1.556s 00:07:56.612 user 0m0.016s 00:07:56.612 sys 0m0.005s 00:07:56.612 20:24:50 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.612 ************************************ 00:07:56.612 20:24:50 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:56.612 END TEST accel_fill 00:07:56.612 ************************************ 00:07:56.612 20:24:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:56.612 20:24:50 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:56.612 20:24:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:56.612 20:24:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.612 20:24:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.612 ************************************ 00:07:56.612 START TEST accel_copy_crc32c 00:07:56.612 ************************************ 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:56.612 20:24:50 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:56.870 [2024-07-12 20:24:50.790005] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:56.870 [2024-07-12 20:24:50.790202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78006 ] 00:07:56.870 [2024-07-12 20:24:50.941582] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:56.870 [2024-07-12 20:24:50.965879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.128 [2024-07-12 20:24:51.070550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.128 20:24:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 
20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.688 00:07:58.688 real 0m1.583s 00:07:58.688 user 0m1.317s 00:07:58.688 sys 0m0.170s 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.688 20:24:52 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:58.688 ************************************ 00:07:58.688 END TEST accel_copy_crc32c 00:07:58.688 ************************************ 00:07:58.688 20:24:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:58.688 20:24:52 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:58.688 20:24:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:58.688 20:24:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.688 20:24:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.688 ************************************ 00:07:58.688 START TEST accel_copy_crc32c_C2 00:07:58.688 ************************************ 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:58.688 [2024-07-12 20:24:52.433733] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:58.688 [2024-07-12 20:24:52.433979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78041 ] 00:07:58.688 [2024-07-12 20:24:52.576380] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:58.688 [2024-07-12 20:24:52.591604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.688 [2024-07-12 20:24:52.688353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 
20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:58.688 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:58.689 20:24:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.135 00:08:00.135 real 0m1.560s 00:08:00.135 user 0m1.309s 00:08:00.135 sys 0m0.164s 00:08:00.135 ************************************ 00:08:00.135 END TEST accel_copy_crc32c_C2 00:08:00.135 ************************************ 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.135 20:24:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:00.135 20:24:53 accel -- common/autotest_common.sh@1142 -- # return 0 
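Each workload in this block follows the same pattern, differing only in the accel_perf command line that run_test passes through accel_test. The invocations recorded in this stretch of the log are collected below as a sketch; flag meanings beyond -t/-w/-y are reproduced from the log rather than re-derived, and the accel_test/run_test wrapping is omitted.

# Sketch: the accel_perf invocations exercised in this part of the run,
# exactly as recorded in the log.
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf

"$ACCEL_PERF" -t 1 -w copy -y                       # accel_copy
"$ACCEL_PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y    # accel_fill
"$ACCEL_PERF" -t 1 -w copy_crc32c -y                # accel_copy_crc32c
"$ACCEL_PERF" -t 1 -w copy_crc32c -y -C 2           # accel_copy_crc32c_C2
"$ACCEL_PERF" -t 1 -w dualcast -y                   # accel_dualcast (runs below)
"$ACCEL_PERF" -t 1 -w compare -y                    # accel_compare (runs below)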
00:08:00.135 20:24:53 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:00.135 20:24:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:00.135 20:24:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.135 20:24:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.135 ************************************ 00:08:00.135 START TEST accel_dualcast 00:08:00.135 ************************************ 00:08:00.135 20:24:53 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:00.135 20:24:53 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:00.135 [2024-07-12 20:24:54.023389] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:00.135 [2024-07-12 20:24:54.023561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78077 ] 00:08:00.135 [2024-07-12 20:24:54.165639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:00.135 [2024-07-12 20:24:54.185022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.419 [2024-07-12 20:24:54.282422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.419 20:24:54 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.419 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:00.420 20:24:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:01.376 20:24:55 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:01.376 20:24:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:01.635 20:24:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:01.635 20:24:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:01.635 20:24:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.635 00:08:01.635 real 0m1.551s 00:08:01.635 user 0m1.297s 00:08:01.635 sys 0m0.156s 00:08:01.635 20:24:55 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.635 ************************************ 00:08:01.635 END TEST accel_dualcast 00:08:01.635 ************************************ 00:08:01.635 20:24:55 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:01.635 20:24:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:01.635 20:24:55 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:01.635 20:24:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:01.635 20:24:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.635 20:24:55 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.635 ************************************ 00:08:01.635 START TEST accel_compare 00:08:01.635 ************************************ 00:08:01.635 20:24:55 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:01.635 20:24:55 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:01.635 [2024-07-12 20:24:55.631236] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:01.635 [2024-07-12 20:24:55.631506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78112 ] 00:08:01.635 [2024-07-12 20:24:55.782898] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:01.893 [2024-07-12 20:24:55.804612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.893 [2024-07-12 20:24:55.903761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.893 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:01.894 20:24:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:03.277 20:24:57 accel.accel_compare -- 
accel/accel.sh@20 -- # val= 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:03.277 20:24:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.277 00:08:03.277 real 0m1.579s 00:08:03.277 user 0m0.016s 00:08:03.277 sys 0m0.002s 00:08:03.277 20:24:57 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.277 ************************************ 00:08:03.277 END TEST accel_compare 00:08:03.277 ************************************ 00:08:03.277 20:24:57 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:03.277 20:24:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.277 20:24:57 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:03.277 20:24:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:03.277 20:24:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.277 20:24:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.277 ************************************ 00:08:03.277 START TEST accel_xor 00:08:03.277 ************************************ 00:08:03.277 20:24:57 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:03.277 20:24:57 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:03.277 [2024-07-12 20:24:57.258019] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:03.277 [2024-07-12 20:24:57.258318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78148 ] 00:08:03.277 [2024-07-12 20:24:57.410591] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
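For context, the accel_xor pass starting here is driven by the accel_perf example binary recorded just above at accel/accel.sh@12. A rough way to reproduce the same run by hand is sketched below; the flag meanings in the comments are my reading of the values echoed in this trace ('1 seconds', the workload name, the verify switch), not something the log spells out.

    # Hedged sketch: manual re-run of the xor workload outside the harness.
    # -t 1    run for 1 second (matches the '1 seconds' value in the trace)
    # -w xor  workload type
    # -y      verify each result (assumed meaning of the flag)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y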
00:08:03.536 [2024-07-12 20:24:57.432432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.536 [2024-07-12 20:24:57.539037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:03.536 20:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:03.537 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:03.537 20:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.916 ************************************ 00:08:04.916 END TEST accel_xor 00:08:04.916 ************************************ 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.916 00:08:04.916 real 0m1.591s 00:08:04.916 user 0m1.311s 00:08:04.916 sys 0m0.188s 00:08:04.916 20:24:58 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.916 20:24:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:04.916 20:24:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:04.916 20:24:58 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:04.916 20:24:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:04.916 20:24:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.916 20:24:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.916 ************************************ 00:08:04.916 START TEST accel_xor 00:08:04.916 ************************************ 00:08:04.916 20:24:58 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:04.916 20:24:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:04.916 [2024-07-12 20:24:58.905140] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:04.916 [2024-07-12 20:24:58.905402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78189 ] 00:08:04.916 [2024-07-12 20:24:59.057554] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:05.175 [2024-07-12 20:24:59.078762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.175 [2024-07-12 20:24:59.177989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.175 20:24:59 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.175 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:05.176 20:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:06.552 20:25:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.552 00:08:06.552 real 0m1.582s 00:08:06.552 user 0m1.309s 00:08:06.552 sys 0m0.178s 00:08:06.552 20:25:00 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.552 ************************************ 00:08:06.552 END TEST accel_xor 00:08:06.552 ************************************ 00:08:06.552 20:25:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:06.552 20:25:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:06.552 20:25:00 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:06.552 20:25:00 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:06.552 20:25:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.552 20:25:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.552 ************************************ 00:08:06.552 START TEST accel_dif_verify 00:08:06.552 ************************************ 00:08:06.552 20:25:00 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:06.552 20:25:00 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:06.552 [2024-07-12 20:25:00.523045] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:06.552 [2024-07-12 20:25:00.523222] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78219 ] 00:08:06.553 [2024-07-12 20:25:00.665374] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:06.553 [2024-07-12 20:25:00.685067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.860 [2024-07-12 20:25:00.775212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.860 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.860 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.860 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.860 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.860 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.860 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.861 20:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.237 20:25:02 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.237 ************************************ 00:08:08.237 END TEST accel_dif_verify 00:08:08.237 ************************************ 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:08.237 20:25:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.237 00:08:08.237 real 0m1.553s 00:08:08.237 user 0m1.288s 00:08:08.237 sys 0m0.171s 00:08:08.237 20:25:02 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.237 20:25:02 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:08.237 20:25:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:08.237 20:25:02 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:08.237 20:25:02 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:08.237 20:25:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.237 20:25:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.237 ************************************ 00:08:08.237 START TEST accel_dif_generate 00:08:08.237 ************************************ 00:08:08.237 20:25:02 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 
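One detail worth noting in the command line that closes the preceding trace line: the harness hands accel_perf its JSON accel configuration on file descriptor 62 (-c /dev/fd/62), so no temporary config file is written. A hedged bash sketch of the same trick, using a placeholder config string rather than whatever build_accel_config actually emits:

    # Feed a placeholder JSON config to accel_perf over fd 62, mirroring '-c /dev/fd/62'.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate \
        62<<< '{"subsystems": []}'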
00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:08.237 20:25:02 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:08.237 [2024-07-12 20:25:02.147425] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:08.237 [2024-07-12 20:25:02.147688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78260 ] 00:08:08.237 [2024-07-12 20:25:02.301823] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:08.237 [2024-07-12 20:25:02.323613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.497 [2024-07-12 20:25:02.426915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # 
IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.497 20:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:09.873 20:25:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.873 00:08:09.873 real 0m1.597s 00:08:09.873 user 0m0.019s 00:08:09.873 sys 0m0.004s 00:08:09.873 20:25:03 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:08:09.873 ************************************ 00:08:09.873 END TEST accel_dif_generate 00:08:09.873 ************************************ 00:08:09.873 20:25:03 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:09.873 20:25:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.873 20:25:03 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:09.873 20:25:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:09.873 20:25:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.873 20:25:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.873 ************************************ 00:08:09.873 START TEST accel_dif_generate_copy 00:08:09.873 ************************************ 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:09.873 20:25:03 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:09.873 [2024-07-12 20:25:03.777726] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:09.874 [2024-07-12 20:25:03.777910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78290 ] 00:08:09.874 [2024-07-12 20:25:03.928950] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
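Each of these cases goes through the same run_test wrapper (visible just above as 'run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy'), which produces the START TEST/END TEST banners and the real/user/sys timings scattered through this log. The following is only a hedged reconstruction of its overall shape, not the actual helper from autotest_common.sh:

    # Rough shape only: banner, time the command, banner, propagate the exit code.
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        local rc=$?
        echo "END TEST $name"
        return $rc
    }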
00:08:09.874 [2024-07-12 20:25:03.946405] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.132 [2024-07-12 20:25:04.039708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.132 20:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.551 00:08:11.551 real 0m1.561s 00:08:11.551 user 0m1.294s 00:08:11.551 sys 0m0.173s 00:08:11.551 ************************************ 00:08:11.551 END TEST accel_dif_generate_copy 00:08:11.551 ************************************ 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.551 20:25:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:11.551 20:25:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.551 20:25:05 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:11.552 20:25:05 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.552 20:25:05 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:11.552 20:25:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.552 20:25:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.552 ************************************ 00:08:11.552 START TEST accel_comp 00:08:11.552 ************************************ 00:08:11.552 20:25:05 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # 
read -r var val 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:11.552 20:25:05 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:11.552 [2024-07-12 20:25:05.388910] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:11.552 [2024-07-12 20:25:05.389108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78331 ] 00:08:11.552 [2024-07-12 20:25:05.544440] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:11.552 [2024-07-12 20:25:05.566927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.552 [2024-07-12 20:25:05.665588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 
-- # val=compress 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.810 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:11.811 20:25:05 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:11.811 20:25:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.186 20:25:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.187 20:25:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.187 20:25:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.187 20:25:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.187 20:25:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.187 20:25:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.187 20:25:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.187 20:25:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:13.187 20:25:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.187 00:08:13.187 real 0m1.574s 00:08:13.187 user 0m1.302s 00:08:13.187 sys 0m0.174s 00:08:13.187 20:25:06 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.187 20:25:06 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:13.187 ************************************ 00:08:13.187 END TEST accel_comp 00:08:13.187 ************************************ 00:08:13.187 20:25:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:13.187 20:25:06 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:13.187 20:25:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:13.187 20:25:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.187 20:25:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.187 ************************************ 00:08:13.187 START TEST accel_decomp 00:08:13.187 ************************************ 00:08:13.187 20:25:06 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:13.187 20:25:06 
accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:13.187 20:25:06 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:13.187 [2024-07-12 20:25:07.009761] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:13.187 [2024-07-12 20:25:07.009983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78365 ] 00:08:13.187 [2024-07-12 20:25:07.160646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:13.187 [2024-07-12 20:25:07.183420] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.187 [2024-07-12 20:25:07.280908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 
accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:13.446 20:25:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.382 20:25:08 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.382 20:25:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.641 20:25:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:14.641 20:25:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:14.641 20:25:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.641 00:08:14.641 real 0m1.573s 00:08:14.641 user 0m0.012s 00:08:14.641 sys 0m0.005s 00:08:14.641 20:25:08 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.641 20:25:08 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:14.641 ************************************ 00:08:14.641 END TEST accel_decomp 00:08:14.641 ************************************ 00:08:14.641 20:25:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:14.641 20:25:08 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:14.641 20:25:08 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:14.641 20:25:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.641 20:25:08 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.641 ************************************ 00:08:14.641 START TEST accel_decomp_full 00:08:14.641 ************************************ 00:08:14.641 20:25:08 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:14.641 20:25:08 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:14.641 [2024-07-12 20:25:08.632978] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:14.641 [2024-07-12 20:25:08.633169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78402 ] 00:08:14.641 [2024-07-12 20:25:08.784146] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:14.900 [2024-07-12 20:25:08.806636] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.900 [2024-07-12 20:25:08.902971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:14.900 20:25:08 accel.accel_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:14.900 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:14.901 20:25:08 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:08:14.901 20:25:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:16.292 20:25:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.292 00:08:16.292 real 0m1.579s 00:08:16.292 user 0m0.020s 00:08:16.292 sys 0m0.002s 00:08:16.292 20:25:10 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.292 ************************************ 00:08:16.292 END TEST accel_decomp_full 00:08:16.292 ************************************ 00:08:16.292 20:25:10 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:16.292 20:25:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:16.292 20:25:10 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:16.292 20:25:10 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:16.292 20:25:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.292 20:25:10 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.292 ************************************ 00:08:16.292 START TEST accel_decomp_mcore 00:08:16.292 ************************************ 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:16.292 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:16.292 [2024-07-12 20:25:10.256986] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:16.292 [2024-07-12 20:25:10.257191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78443 ] 00:08:16.292 [2024-07-12 20:25:10.412169] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:16.292 [2024-07-12 20:25:10.433539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.551 [2024-07-12 20:25:10.538377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.551 [2024-07-12 20:25:10.538532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.551 [2024-07-12 20:25:10.538661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.551 [2024-07-12 20:25:10.538856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:08:16.551 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:16.552 20:25:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.927 20:25:11 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.927 00:08:17.927 real 0m1.600s 00:08:17.927 user 0m0.014s 00:08:17.927 sys 0m0.005s 00:08:17.927 ************************************ 00:08:17.927 END TEST accel_decomp_mcore 00:08:17.927 ************************************ 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.927 20:25:11 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:17.927 20:25:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:17.927 20:25:11 
accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:17.927 20:25:11 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:17.927 20:25:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.927 20:25:11 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.927 ************************************ 00:08:17.927 START TEST accel_decomp_full_mcore 00:08:17.927 ************************************ 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:17.927 20:25:11 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:17.927 [2024-07-12 20:25:11.897729] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:17.927 [2024-07-12 20:25:11.898595] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78476 ] 00:08:17.927 [2024-07-12 20:25:12.041478] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:17.927 [2024-07-12 20:25:12.059595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.186 [2024-07-12 20:25:12.161177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.186 [2024-07-12 20:25:12.161362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.186 [2024-07-12 20:25:12.161458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.186 [2024-07-12 20:25:12.161616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # 
IFS=: 00:08:18.186 20:25:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.561 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.561 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.561 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.561 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.561 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.562 00:08:19.562 real 0m1.583s 00:08:19.562 user 0m4.786s 00:08:19.562 sys 0m0.197s 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.562 20:25:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:19.562 ************************************ 00:08:19.562 END TEST accel_decomp_full_mcore 00:08:19.562 ************************************ 00:08:19.562 20:25:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:19.562 20:25:13 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:19.562 20:25:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:19.562 20:25:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.562 20:25:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.562 ************************************ 00:08:19.562 START TEST accel_decomp_mthread 00:08:19.562 ************************************ 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:19.562 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:19.562 [2024-07-12 20:25:13.528470] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:19.562 [2024-07-12 20:25:13.528636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78520 ] 00:08:19.562 [2024-07-12 20:25:13.670397] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
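The START TEST / END TEST banners and the real/user/sys block that closes each case come from the run_test helper in common/autotest_common.sh, which the trace invokes with a test name followed by the command to run. A simplified sketch of that pattern (the real helper also handles xtrace toggling and return-code bookkeeping):

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"            # produces the real/user/sys lines printed after each test
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}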
00:08:19.562 [2024-07-12 20:25:13.686461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.821 [2024-07-12 20:25:13.787310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.821 20:25:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.229 20:25:15 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.229 ************************************ 00:08:21.229 END TEST accel_decomp_mthread 00:08:21.229 ************************************ 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.229 00:08:21.229 real 0m1.552s 00:08:21.229 user 0m0.019s 00:08:21.229 sys 0m0.002s 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.229 20:25:15 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:21.229 20:25:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:21.229 20:25:15 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:21.229 20:25:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:21.229 20:25:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.229 20:25:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.229 ************************************ 00:08:21.229 START TEST accel_decomp_full_mthread 00:08:21.229 ************************************ 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:21.229 20:25:15 
accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.229 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:21.230 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:21.230 [2024-07-12 20:25:15.142372] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:21.230 [2024-07-12 20:25:15.142552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78550 ] 00:08:21.230 [2024-07-12 20:25:15.285122] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
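The long runs of "val=" lines in each case are accel_test replaying its option stream: name/value pairs separated by ":" are consumed by a read/case loop, which is why every trace line repeats IFS=: and read -r var val. A loose sketch of that shape, reconstructed from the trace rather than from accel.sh itself:

# hypothetical reconstruction of the option loop suggested by the xtrace output
while IFS=: read -r var val; do
    case "$var" in
        opc)    accel_opc=$val ;;      # e.g. decompress
        module) accel_module=$val ;;   # e.g. software
        *)      : ;;                   # core mask, run time, buffer size handled the same way
    esac
done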
00:08:21.230 [2024-07-12 20:25:15.309951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.488 [2024-07-12 20:25:15.418380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.488 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.489 20:25:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.865 00:08:22.865 real 0m1.618s 00:08:22.865 user 0m1.350s 00:08:22.865 sys 0m0.176s 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.865 20:25:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:22.865 ************************************ 00:08:22.865 END TEST accel_decomp_full_mthread 00:08:22.865 ************************************ 00:08:22.865 20:25:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:22.865 20:25:16 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:22.865 20:25:16 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:22.865 20:25:16 accel -- 
accel/accel.sh@137 -- # build_accel_config 00:08:22.865 20:25:16 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:22.865 20:25:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.865 20:25:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.866 20:25:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.866 20:25:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.866 20:25:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.866 20:25:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.866 20:25:16 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.866 20:25:16 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:22.866 20:25:16 accel -- accel/accel.sh@41 -- # jq -r . 00:08:22.866 ************************************ 00:08:22.866 START TEST accel_dif_functional_tests 00:08:22.866 ************************************ 00:08:22.866 20:25:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:22.866 [2024-07-12 20:25:16.862788] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:22.866 [2024-07-12 20:25:16.862981] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78592 ] 00:08:23.124 [2024-07-12 20:25:17.016945] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:23.124 [2024-07-12 20:25:17.035844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:23.124 [2024-07-12 20:25:17.138130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.124 [2024-07-12 20:25:17.138222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.124 [2024-07-12 20:25:17.138207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.124 00:08:23.124 00:08:23.124 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.124 http://cunit.sourceforge.net/ 00:08:23.124 00:08:23.124 00:08:23.124 Suite: accel_dif 00:08:23.124 Test: verify: DIF generated, GUARD check ...passed 00:08:23.124 Test: verify: DIF generated, APPTAG check ...passed 00:08:23.124 Test: verify: DIF generated, REFTAG check ...passed 00:08:23.124 Test: verify: DIF not generated, GUARD check ...[2024-07-12 20:25:17.239846] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:23.124 passed 00:08:23.124 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 20:25:17.240287] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:23.124 passed 00:08:23.124 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 20:25:17.240469] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:23.124 passed 00:08:23.124 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:23.124 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:23.124 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:23.124 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-12 20:25:17.240689] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:23.124 passed 00:08:23.124 Test: verify: REFTAG_INIT 
correct, REFTAG check ...passed 00:08:23.124 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 20:25:17.241100] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:23.124 passed 00:08:23.124 Test: verify copy: DIF generated, GUARD check ...passed 00:08:23.124 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:23.124 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:23.124 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 20:25:17.241542] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:23.124 passed 00:08:23.124 Test: verify copy: DIF not generated, APPTAG check ...passed 00:08:23.124 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 20:25:17.241757] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:23.124 [2024-07-12 20:25:17.241902] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:23.124 passed 00:08:23.124 Test: generate copy: DIF generated, GUARD check ...passed 00:08:23.124 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:23.124 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:23.124 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:23.124 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:23.124 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:23.124 Test: generate copy: iovecs-len validate ...[2024-07-12 20:25:17.242590] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:23.124 passed 00:08:23.124 Test: generate copy: buffer alignment validate ...passed 00:08:23.124 00:08:23.124 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.124 suites 1 1 n/a 0 0 00:08:23.124 tests 26 26 26 0 0 00:08:23.124 asserts 115 115 115 0 n/a 00:08:23.124 00:08:23.124 Elapsed time = 0.009 seconds 00:08:23.381 00:08:23.381 real 0m0.735s 00:08:23.381 user 0m0.887s 00:08:23.381 sys 0m0.249s 00:08:23.381 20:25:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.381 20:25:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:23.381 ************************************ 00:08:23.381 END TEST accel_dif_functional_tests 00:08:23.381 ************************************ 00:08:23.639 20:25:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:23.639 00:08:23.639 real 0m36.411s 00:08:23.639 user 0m36.839s 00:08:23.639 sys 0m5.457s 00:08:23.639 20:25:17 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.639 ************************************ 00:08:23.639 END TEST accel 00:08:23.639 ************************************ 00:08:23.639 20:25:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.639 20:25:17 -- common/autotest_common.sh@1142 -- # return 0 00:08:23.639 20:25:17 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:23.639 20:25:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:23.639 20:25:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.639 20:25:17 -- common/autotest_common.sh@10 -- # set +x 00:08:23.639 ************************************ 00:08:23.639 START TEST accel_rpc 00:08:23.639 ************************************ 00:08:23.639 20:25:17 
accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:23.639 * Looking for test storage... 00:08:23.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:23.639 20:25:17 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:23.639 20:25:17 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=78663 00:08:23.639 20:25:17 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:23.639 20:25:17 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 78663 00:08:23.639 20:25:17 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 78663 ']' 00:08:23.639 20:25:17 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.639 20:25:17 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.639 20:25:17 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.639 20:25:17 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.639 20:25:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.896 [2024-07-12 20:25:17.796751] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:23.896 [2024-07-12 20:25:17.796989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78663 ] 00:08:23.896 [2024-07-12 20:25:17.950357] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
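The accel_assign_opcode case that follows drives the freshly started spdk_tgt purely over JSON-RPC. A sketch of the same sequence issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock socket (the test wraps these calls in its rpc_cmd helper):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted before init even though the module name is bogus
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software    # reassign the copy opcode to the software module
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init                    # complete startup so the assignment takes effect
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy # expected output: software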
00:08:23.896 [2024-07-12 20:25:17.970464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.153 [2024-07-12 20:25:18.066301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.721 20:25:18 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.721 20:25:18 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:24.721 20:25:18 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:24.721 20:25:18 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:24.721 20:25:18 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:24.721 20:25:18 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:24.721 20:25:18 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:24.721 20:25:18 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:24.721 20:25:18 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.721 20:25:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.721 ************************************ 00:08:24.721 START TEST accel_assign_opcode 00:08:24.721 ************************************ 00:08:24.721 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:24.721 20:25:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:24.721 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.721 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:24.721 [2024-07-12 20:25:18.699280] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:24.721 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.721 20:25:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:24.721 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.721 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:24.721 [2024-07-12 20:25:18.707307] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:24.721 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.721 20:25:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:24.721 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.721 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:24.980 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.980 20:25:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:24.980 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.980 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:24.980 20:25:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:24.980 20:25:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:24.980 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.980 software 00:08:24.980 00:08:24.980 real 0m0.306s 
00:08:24.980 user 0m0.056s 00:08:24.980 sys 0m0.008s 00:08:24.980 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.980 20:25:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:24.980 ************************************ 00:08:24.980 END TEST accel_assign_opcode 00:08:24.980 ************************************ 00:08:24.980 20:25:19 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:24.980 20:25:19 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 78663 00:08:24.980 20:25:19 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 78663 ']' 00:08:24.980 20:25:19 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 78663 00:08:24.980 20:25:19 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:24.980 20:25:19 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:24.980 20:25:19 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78663 00:08:24.980 killing process with pid 78663 00:08:24.980 20:25:19 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:24.980 20:25:19 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:24.980 20:25:19 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78663' 00:08:24.980 20:25:19 accel_rpc -- common/autotest_common.sh@967 -- # kill 78663 00:08:24.980 20:25:19 accel_rpc -- common/autotest_common.sh@972 -- # wait 78663 00:08:25.563 ************************************ 00:08:25.563 END TEST accel_rpc 00:08:25.563 ************************************ 00:08:25.563 00:08:25.563 real 0m1.929s 00:08:25.563 user 0m1.908s 00:08:25.563 sys 0m0.547s 00:08:25.563 20:25:19 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.563 20:25:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.563 20:25:19 -- common/autotest_common.sh@1142 -- # return 0 00:08:25.563 20:25:19 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:25.563 20:25:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:25.563 20:25:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.563 20:25:19 -- common/autotest_common.sh@10 -- # set +x 00:08:25.563 ************************************ 00:08:25.563 START TEST app_cmdline 00:08:25.563 ************************************ 00:08:25.563 20:25:19 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:25.563 * Looking for test storage... 00:08:25.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:25.563 20:25:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:25.563 20:25:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=78756 00:08:25.563 20:25:19 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:25.563 20:25:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 78756 00:08:25.563 20:25:19 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 78756 ']' 00:08:25.563 20:25:19 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.563 20:25:19 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
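The app_cmdline case above starts spdk_tgt with an RPC allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods), so only those two methods are reachable and every other method fails with JSON-RPC error -32601. A sketch of what the test exercises, with paths taken from the trace:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed: returns the version JSON shown below
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # allowed: lists exactly the permitted methods
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # rejected: "Method not found" (-32601)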
00:08:25.563 20:25:19 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.563 20:25:19 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.563 20:25:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:25.821 [2024-07-12 20:25:19.762523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:25.821 [2024-07-12 20:25:19.762979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78756 ] 00:08:25.821 [2024-07-12 20:25:19.918822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:25.821 [2024-07-12 20:25:19.943749] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.077 [2024-07-12 20:25:20.047836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.640 20:25:20 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.640 20:25:20 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:26.640 20:25:20 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:26.898 { 00:08:26.898 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:08:26.898 "fields": { 00:08:26.898 "major": 24, 00:08:26.898 "minor": 9, 00:08:26.898 "patch": 0, 00:08:26.898 "suffix": "-pre", 00:08:26.898 "commit": "719d03c6a" 00:08:26.898 } 00:08:26.898 } 00:08:26.898 20:25:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:26.898 20:25:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:26.898 20:25:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:26.898 20:25:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:26.898 20:25:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:26.898 20:25:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:26.898 20:25:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.898 20:25:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:26.898 20:25:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:26.898 20:25:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:26.898 20:25:20 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:27.156 request: 00:08:27.156 { 00:08:27.156 "method": "env_dpdk_get_mem_stats", 00:08:27.156 "req_id": 1 00:08:27.156 } 00:08:27.156 Got JSON-RPC error response 00:08:27.156 response: 00:08:27.156 { 00:08:27.156 "code": -32601, 00:08:27.156 "message": "Method not found" 00:08:27.156 } 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:27.156 20:25:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 78756 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 78756 ']' 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 78756 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78756 00:08:27.156 killing process with pid 78756 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78756' 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@967 -- # kill 78756 00:08:27.156 20:25:21 app_cmdline -- common/autotest_common.sh@972 -- # wait 78756 00:08:27.720 ************************************ 00:08:27.720 END TEST app_cmdline 00:08:27.720 ************************************ 00:08:27.720 00:08:27.720 real 0m2.139s 00:08:27.720 user 0m2.543s 00:08:27.720 sys 0m0.567s 00:08:27.720 20:25:21 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.720 20:25:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:27.720 20:25:21 -- common/autotest_common.sh@1142 -- # return 0 00:08:27.720 20:25:21 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:27.720 20:25:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:27.720 20:25:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.720 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:08:27.720 ************************************ 00:08:27.720 START TEST version 00:08:27.720 ************************************ 00:08:27.720 20:25:21 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:27.720 * Looking for test storage... 
00:08:27.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:27.720 20:25:21 version -- app/version.sh@17 -- # get_header_version major 00:08:27.720 20:25:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:27.720 20:25:21 version -- app/version.sh@14 -- # cut -f2 00:08:27.720 20:25:21 version -- app/version.sh@14 -- # tr -d '"' 00:08:27.720 20:25:21 version -- app/version.sh@17 -- # major=24 00:08:27.720 20:25:21 version -- app/version.sh@18 -- # get_header_version minor 00:08:27.720 20:25:21 version -- app/version.sh@14 -- # cut -f2 00:08:27.720 20:25:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:27.720 20:25:21 version -- app/version.sh@14 -- # tr -d '"' 00:08:27.720 20:25:21 version -- app/version.sh@18 -- # minor=9 00:08:27.720 20:25:21 version -- app/version.sh@19 -- # get_header_version patch 00:08:27.720 20:25:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:27.720 20:25:21 version -- app/version.sh@14 -- # cut -f2 00:08:27.720 20:25:21 version -- app/version.sh@14 -- # tr -d '"' 00:08:27.720 20:25:21 version -- app/version.sh@19 -- # patch=0 00:08:27.720 20:25:21 version -- app/version.sh@20 -- # get_header_version suffix 00:08:27.720 20:25:21 version -- app/version.sh@14 -- # cut -f2 00:08:27.720 20:25:21 version -- app/version.sh@14 -- # tr -d '"' 00:08:27.720 20:25:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:27.720 20:25:21 version -- app/version.sh@20 -- # suffix=-pre 00:08:27.720 20:25:21 version -- app/version.sh@22 -- # version=24.9 00:08:27.720 20:25:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:27.720 20:25:21 version -- app/version.sh@28 -- # version=24.9rc0 00:08:27.720 20:25:21 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:27.720 20:25:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:27.978 20:25:21 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:27.978 20:25:21 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:27.978 00:08:27.978 real 0m0.150s 00:08:27.978 user 0m0.088s 00:08:27.978 sys 0m0.092s 00:08:27.978 ************************************ 00:08:27.978 END TEST version 00:08:27.978 ************************************ 00:08:27.978 20:25:21 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.978 20:25:21 version -- common/autotest_common.sh@10 -- # set +x 00:08:27.978 20:25:21 -- common/autotest_common.sh@1142 -- # return 0 00:08:27.978 20:25:21 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:27.978 20:25:21 -- spdk/autotest.sh@198 -- # uname -s 00:08:27.978 20:25:21 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:27.978 20:25:21 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:27.978 20:25:21 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:27.978 20:25:21 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:08:27.978 20:25:21 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:27.978 
20:25:21 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:27.978 20:25:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.978 20:25:21 -- common/autotest_common.sh@10 -- # set +x 00:08:27.978 ************************************ 00:08:27.978 START TEST blockdev_nvme 00:08:27.978 ************************************ 00:08:27.978 20:25:21 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:27.978 * Looking for test storage... 00:08:27.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:27.978 20:25:22 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=78902 00:08:27.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:27.978 20:25:22 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:27.979 20:25:22 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 78902 00:08:27.979 20:25:22 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 78902 ']' 00:08:27.979 20:25:22 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.979 20:25:22 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.979 20:25:22 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:27.979 20:25:22 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.979 20:25:22 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.979 20:25:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.237 [2024-07-12 20:25:22.155697] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:28.237 [2024-07-12 20:25:22.156464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78902 ] 00:08:28.237 [2024-07-12 20:25:22.300614] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:28.237 [2024-07-12 20:25:22.322422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.494 [2024-07-12 20:25:22.426151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.058 20:25:23 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.058 20:25:23 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:08:29.058 20:25:23 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:29.058 20:25:23 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:08:29.058 20:25:23 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:29.058 20:25:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:29.059 20:25:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:29.059 20:25:23 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:29.059 20:25:23 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.059 20:25:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.317 20:25:23 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.317 20:25:23 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:29.317 20:25:23 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.317 20:25:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.317 20:25:23 
blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.317 20:25:23 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:08:29.317 20:25:23 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:29.317 20:25:23 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.317 20:25:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.317 20:25:23 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.317 20:25:23 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:29.317 20:25:23 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.317 20:25:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.576 20:25:23 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.576 20:25:23 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:29.576 20:25:23 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.576 20:25:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.576 20:25:23 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.576 20:25:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:29.576 20:25:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:29.576 20:25:23 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:29.576 20:25:23 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.576 20:25:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.576 20:25:23 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.576 20:25:23 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:29.576 20:25:23 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:29.577 20:25:23 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3db63be6-c900-4caf-84b4-4d7f257f6583"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3db63be6-c900-4caf-84b4-4d7f257f6583",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' 
}' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "eed3d901-1c39-4330-b206-6996ad66b345"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "eed3d901-1c39-4330-b206-6996ad66b345",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ba65c3dc-e8c1-458a-9e81-086d848a1658"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ba65c3dc-e8c1-458a-9e81-086d848a1658",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5f951918-2c2a-46bb-a520-ec45e89d0dcd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5f951918-2c2a-46bb-a520-ec45e89d0dcd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' 
"nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b5c74820-ebb6-4080-bae2-15d165f7fda7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b5c74820-ebb6-4080-bae2-15d165f7fda7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "bad74411-57b2-4ce9-bd23-ad8ca342c479"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bad74411-57b2-4ce9-bd23-ad8ca342c479",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' 
' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:29.577 20:25:23 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:29.577 20:25:23 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:08:29.577 20:25:23 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:29.577 20:25:23 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 78902 00:08:29.577 20:25:23 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 78902 ']' 00:08:29.577 20:25:23 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 78902 00:08:29.577 20:25:23 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:08:29.577 20:25:23 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:29.577 20:25:23 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78902 00:08:29.577 killing process with pid 78902 00:08:29.577 20:25:23 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:29.577 20:25:23 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:29.577 20:25:23 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78902' 00:08:29.577 20:25:23 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 78902 00:08:29.577 20:25:23 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 78902 00:08:30.144 20:25:24 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:30.144 20:25:24 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:30.144 20:25:24 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:30.144 20:25:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.144 20:25:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.144 ************************************ 00:08:30.144 START TEST bdev_hello_world 00:08:30.144 ************************************ 00:08:30.144 20:25:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:30.144 [2024-07-12 20:25:24.199122] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:30.144 [2024-07-12 20:25:24.199319] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78975 ] 00:08:30.403 [2024-07-12 20:25:24.341823] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
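The JSON blocks above are the raw bdev_get_bdevs output for the controllers attached from the gen_nvme.sh config; a minimal sketch of doing the same attach-and-list by hand with the rpc.py calls seen in the trace (flag spellings are an assumption, check scripts/rpc.py bdev_nvme_attach_controller -h):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0      # one attach per controller in the config
    $rpc bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'  # Nvme0n1, Nvme1n1, ... as filtered by blockdev.sh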
00:08:30.404 [2024-07-12 20:25:24.360385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.404 [2024-07-12 20:25:24.458297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.986 [2024-07-12 20:25:24.879363] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:30.986 [2024-07-12 20:25:24.879446] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:30.986 [2024-07-12 20:25:24.879487] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:30.986 [2024-07-12 20:25:24.882146] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:30.986 [2024-07-12 20:25:24.882826] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:30.986 [2024-07-12 20:25:24.882872] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:30.986 [2024-07-12 20:25:24.883082] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:30.986 00:08:30.986 [2024-07-12 20:25:24.883128] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:31.244 ************************************ 00:08:31.244 END TEST bdev_hello_world 00:08:31.244 ************************************ 00:08:31.244 00:08:31.244 real 0m1.028s 00:08:31.245 user 0m0.696s 00:08:31.245 sys 0m0.226s 00:08:31.245 20:25:25 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.245 20:25:25 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:31.245 20:25:25 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:31.245 20:25:25 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:31.245 20:25:25 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:31.245 20:25:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.245 20:25:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.245 ************************************ 00:08:31.245 START TEST bdev_bounds 00:08:31.245 ************************************ 00:08:31.245 20:25:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:08:31.245 20:25:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=79006 00:08:31.245 20:25:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:31.245 20:25:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:31.245 Process bdevio pid: 79006 00:08:31.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.245 20:25:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 79006' 00:08:31.245 20:25:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 79006 00:08:31.245 20:25:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 79006 ']' 00:08:31.245 20:25:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.245 20:25:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.245 20:25:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
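bdev_hello_world above is a single run of the standalone example binary; a minimal sketch of that invocation, as it appears in the trace, writing "Hello World!" to the chosen bdev and reading it back:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1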
00:08:31.245 20:25:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.245 20:25:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:31.245 [2024-07-12 20:25:25.278849] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:31.245 [2024-07-12 20:25:25.279033] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79006 ] 00:08:31.503 [2024-07-12 20:25:25.422294] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:31.503 [2024-07-12 20:25:25.441293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.503 [2024-07-12 20:25:25.537786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.503 [2024-07-12 20:25:25.537879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.503 [2024-07-12 20:25:25.537933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.070 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.070 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:08:32.070 20:25:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:32.330 I/O targets: 00:08:32.330 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:32.330 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:32.330 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:32.330 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:32.330 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:32.330 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:32.330 00:08:32.330 00:08:32.330 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.330 http://cunit.sourceforge.net/ 00:08:32.330 00:08:32.330 00:08:32.330 Suite: bdevio tests on: Nvme3n1 00:08:32.330 Test: blockdev write read block ...passed 00:08:32.330 Test: blockdev write zeroes read block ...passed 00:08:32.330 Test: blockdev write zeroes read no split ...passed 00:08:32.330 Test: blockdev write zeroes read split ...passed 00:08:32.330 Test: blockdev write zeroes read split partial ...passed 00:08:32.330 Test: blockdev reset ...[2024-07-12 20:25:26.297022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:32.330 passed 00:08:32.330 Test: blockdev write read 8 blocks ...[2024-07-12 20:25:26.299477] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
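The bdev_bounds suites that follow are driven by bdevio running as an RPC-controlled app; a minimal sketch of the two-step launch used above (bdevio is backgrounded here only for illustration; the harness waits for /var/tmp/spdk.sock before firing perform_tests):

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests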
00:08:32.330 passed 00:08:32.330 Test: blockdev write read size > 128k ...passed 00:08:32.330 Test: blockdev write read invalid size ...passed 00:08:32.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:32.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:32.330 Test: blockdev write read max offset ...passed 00:08:32.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:32.330 Test: blockdev writev readv 8 blocks ...passed 00:08:32.330 Test: blockdev writev readv 30 x 1block ...passed 00:08:32.330 Test: blockdev writev readv block ...passed 00:08:32.330 Test: blockdev writev readv size > 128k ...passed 00:08:32.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:32.330 Test: blockdev comparev and writev ...[2024-07-12 20:25:26.305654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfc04000 len:0x1000 00:08:32.330 [2024-07-12 20:25:26.305731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:32.330 passed 00:08:32.330 Test: blockdev nvme passthru rw ...passed 00:08:32.330 Test: blockdev nvme passthru vendor specific ...passed 00:08:32.330 Test: blockdev nvme admin passthru ...[2024-07-12 20:25:26.306701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:32.330 [2024-07-12 20:25:26.306765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:32.330 passed 00:08:32.330 Test: blockdev copy ...passed 00:08:32.330 Suite: bdevio tests on: Nvme2n3 00:08:32.330 Test: blockdev write read block ...passed 00:08:32.330 Test: blockdev write zeroes read block ...passed 00:08:32.330 Test: blockdev write zeroes read no split ...passed 00:08:32.330 Test: blockdev write zeroes read split ...passed 00:08:32.330 Test: blockdev write zeroes read split partial ...passed 00:08:32.330 Test: blockdev reset ...[2024-07-12 20:25:26.326504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:32.330 [2024-07-12 20:25:26.329424] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:32.330 passed 00:08:32.330 Test: blockdev write read 8 blocks ...passed 00:08:32.330 Test: blockdev write read size > 128k ...passed 00:08:32.330 Test: blockdev write read invalid size ...passed 00:08:32.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:32.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:32.330 Test: blockdev write read max offset ...passed 00:08:32.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:32.330 Test: blockdev writev readv 8 blocks ...passed 00:08:32.330 Test: blockdev writev readv 30 x 1block ...passed 00:08:32.330 Test: blockdev writev readv block ...passed 00:08:32.330 Test: blockdev writev readv size > 128k ...passed 00:08:32.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:32.330 Test: blockdev comparev and writev ...[2024-07-12 20:25:26.335937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfc02000 len:0x1000 00:08:32.330 [2024-07-12 20:25:26.336013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:32.330 passed 00:08:32.330 Test: blockdev nvme passthru rw ...passed 00:08:32.330 Test: blockdev nvme passthru vendor specific ...[2024-07-12 20:25:26.336839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:32.330 [2024-07-12 20:25:26.336880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:32.330 passed 00:08:32.330 Test: blockdev nvme admin passthru ...passed 00:08:32.330 Test: blockdev copy ...passed 00:08:32.330 Suite: bdevio tests on: Nvme2n2 00:08:32.330 Test: blockdev write read block ...passed 00:08:32.330 Test: blockdev write zeroes read block ...passed 00:08:32.330 Test: blockdev write zeroes read no split ...passed 00:08:32.330 Test: blockdev write zeroes read split ...passed 00:08:32.330 Test: blockdev write zeroes read split partial ...passed 00:08:32.330 Test: blockdev reset ...[2024-07-12 20:25:26.363924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:32.330 [2024-07-12 20:25:26.366877] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:32.330 passed 00:08:32.330 Test: blockdev write read 8 blocks ...passed 00:08:32.330 Test: blockdev write read size > 128k ...passed 00:08:32.330 Test: blockdev write read invalid size ...passed 00:08:32.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:32.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:32.330 Test: blockdev write read max offset ...passed 00:08:32.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:32.330 Test: blockdev writev readv 8 blocks ...passed 00:08:32.330 Test: blockdev writev readv 30 x 1block ...passed 00:08:32.330 Test: blockdev writev readv block ...passed 00:08:32.330 Test: blockdev writev readv size > 128k ...passed 00:08:32.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:32.330 Test: blockdev comparev and writev ...[2024-07-12 20:25:26.373195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfc0c000 len:0x1000 00:08:32.330 [2024-07-12 20:25:26.373281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:32.330 passed 00:08:32.330 Test: blockdev nvme passthru rw ...passed 00:08:32.330 Test: blockdev nvme passthru vendor specific ...passed 00:08:32.330 Test: blockdev nvme admin passthru ...[2024-07-12 20:25:26.374077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:32.330 [2024-07-12 20:25:26.374121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:32.330 passed 00:08:32.330 Test: blockdev copy ...passed 00:08:32.330 Suite: bdevio tests on: Nvme2n1 00:08:32.330 Test: blockdev write read block ...passed 00:08:32.330 Test: blockdev write zeroes read block ...passed 00:08:32.330 Test: blockdev write zeroes read no split ...passed 00:08:32.330 Test: blockdev write zeroes read split ...passed 00:08:32.330 Test: blockdev write zeroes read split partial ...passed 00:08:32.330 Test: blockdev reset ...[2024-07-12 20:25:26.391219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:32.330 passed 00:08:32.330 Test: blockdev write read 8 blocks ...[2024-07-12 20:25:26.394046] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:32.330 passed 00:08:32.330 Test: blockdev write read size > 128k ...passed 00:08:32.330 Test: blockdev write read invalid size ...passed 00:08:32.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:32.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:32.330 Test: blockdev write read max offset ...passed 00:08:32.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:32.330 Test: blockdev writev readv 8 blocks ...passed 00:08:32.330 Test: blockdev writev readv 30 x 1block ...passed 00:08:32.330 Test: blockdev writev readv block ...passed 00:08:32.330 Test: blockdev writev readv size > 128k ...passed 00:08:32.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:32.330 Test: blockdev comparev and writev ...[2024-07-12 20:25:26.400857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:08:32.330 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2cf836000 len:0x1000 00:08:32.330 [2024-07-12 20:25:26.401070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:32.330 passed 00:08:32.330 Test: blockdev nvme passthru vendor specific ...[2024-07-12 20:25:26.402235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:32.330 [2024-07-12 20:25:26.402287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:32.330 passed 00:08:32.330 Test: blockdev nvme admin passthru ...passed 00:08:32.330 Test: blockdev copy ...passed 00:08:32.330 Suite: bdevio tests on: Nvme1n1 00:08:32.330 Test: blockdev write read block ...passed 00:08:32.330 Test: blockdev write zeroes read block ...passed 00:08:32.330 Test: blockdev write zeroes read no split ...passed 00:08:32.330 Test: blockdev write zeroes read split ...passed 00:08:32.330 Test: blockdev write zeroes read split partial ...passed 00:08:32.330 Test: blockdev reset ...[2024-07-12 20:25:26.431407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:32.330 [2024-07-12 20:25:26.433758] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:32.330 passed 00:08:32.330 Test: blockdev write read 8 blocks ...passed 00:08:32.330 Test: blockdev write read size > 128k ...passed 00:08:32.330 Test: blockdev write read invalid size ...passed 00:08:32.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:32.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:32.330 Test: blockdev write read max offset ...passed 00:08:32.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:32.330 Test: blockdev writev readv 8 blocks ...passed 00:08:32.330 Test: blockdev writev readv 30 x 1block ...passed 00:08:32.330 Test: blockdev writev readv block ...passed 00:08:32.330 Test: blockdev writev readv size > 128k ...passed 00:08:32.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:32.330 Test: blockdev comparev and writev ...[2024-07-12 20:25:26.441426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf832000 len:0x1000 00:08:32.330 [2024-07-12 20:25:26.441499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:32.330 passed 00:08:32.331 Test: blockdev nvme passthru rw ...passed 00:08:32.331 Test: blockdev nvme passthru vendor specific ...[2024-07-12 20:25:26.442292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:32.331 [2024-07-12 20:25:26.442333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:32.331 passed 00:08:32.331 Test: blockdev nvme admin passthru ...passed 00:08:32.331 Test: blockdev copy ...passed 00:08:32.331 Suite: bdevio tests on: Nvme0n1 00:08:32.331 Test: blockdev write read block ...passed 00:08:32.331 Test: blockdev write zeroes read block ...passed 00:08:32.331 Test: blockdev write zeroes read no split ...passed 00:08:32.331 Test: blockdev write zeroes read split ...passed 00:08:32.331 Test: blockdev write zeroes read split partial ...passed 00:08:32.331 Test: blockdev reset ...[2024-07-12 20:25:26.468132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:32.331 passed 00:08:32.331 Test: blockdev write read 8 blocks ...[2024-07-12 20:25:26.470462] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:32.331 passed 00:08:32.331 Test: blockdev write read size > 128k ...passed 00:08:32.331 Test: blockdev write read invalid size ...passed 00:08:32.331 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:32.331 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:32.331 Test: blockdev write read max offset ...passed 00:08:32.331 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:32.331 Test: blockdev writev readv 8 blocks ...passed 00:08:32.331 Test: blockdev writev readv 30 x 1block ...passed 00:08:32.331 Test: blockdev writev readv block ...passed 00:08:32.331 Test: blockdev writev readv size > 128k ...passed 00:08:32.331 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:32.331 Test: blockdev comparev and writev ...[2024-07-12 20:25:26.475334] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:32.331 separate metadata which is not supported yet. 
00:08:32.331 passed 00:08:32.331 Test: blockdev nvme passthru rw ...passed 00:08:32.331 Test: blockdev nvme passthru vendor specific ...[2024-07-12 20:25:26.476110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:32.331 [2024-07-12 20:25:26.476164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:32.331 passed 00:08:32.590 Test: blockdev nvme admin passthru ...passed 00:08:32.590 Test: blockdev copy ...passed 00:08:32.590 00:08:32.590 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.590 suites 6 6 n/a 0 0 00:08:32.590 tests 138 138 138 0 0 00:08:32.590 asserts 893 893 893 0 n/a 00:08:32.590 00:08:32.590 Elapsed time = 0.463 seconds 00:08:32.590 0 00:08:32.590 20:25:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 79006 00:08:32.590 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 79006 ']' 00:08:32.590 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 79006 00:08:32.590 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:08:32.590 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:32.590 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79006 00:08:32.590 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:32.590 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:32.590 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79006' 00:08:32.590 killing process with pid 79006 00:08:32.590 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 79006 00:08:32.590 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 79006 00:08:32.850 20:25:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:32.850 00:08:32.850 real 0m1.555s 00:08:32.850 user 0m3.730s 00:08:32.850 sys 0m0.374s 00:08:32.850 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.850 20:25:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:32.850 ************************************ 00:08:32.850 END TEST bdev_bounds 00:08:32.850 ************************************ 00:08:32.850 20:25:26 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:32.850 20:25:26 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:32.850 20:25:26 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:32.850 20:25:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.850 20:25:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.850 ************************************ 00:08:32.850 START TEST bdev_nbd 00:08:32.850 ************************************ 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- 
bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=79055 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 79055 /var/tmp/spdk-nbd.sock 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 79055 ']' 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:32.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.850 20:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:32.850 [2024-07-12 20:25:26.901741] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:32.850 [2024-07-12 20:25:26.901920] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.110 [2024-07-12 20:25:27.046727] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:33.110 [2024-07-12 20:25:27.063585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.110 [2024-07-12 20:25:27.163449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:34.046 20:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.046 1+0 records in 00:08:34.046 1+0 records out 00:08:34.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049634 s, 8.3 MB/s 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:34.046 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.304 1+0 records in 00:08:34.304 1+0 records out 00:08:34.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410025 s, 10.0 MB/s 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.304 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:34.562 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- 
# local i 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.821 1+0 records in 00:08:34.821 1+0 records out 00:08:34.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619686 s, 6.6 MB/s 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:34.821 20:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.080 1+0 records in 00:08:35.080 1+0 records out 00:08:35.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621378 s, 6.6 MB/s 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:35.080 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.339 1+0 records in 00:08:35.339 1+0 records out 00:08:35.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000819376 s, 5.0 MB/s 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:35.339 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:35.597 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:35.597 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:35.597 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:35.597 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:35.597 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:35.597 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:35.597 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:35.597 20:25:29 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:35.597 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:35.597 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:35.598 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:35.598 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.598 1+0 records in 00:08:35.598 1+0 records out 00:08:35.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668138 s, 6.1 MB/s 00:08:35.598 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.598 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:35.598 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.598 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:35.598 20:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:35.598 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.598 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:35.598 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:35.856 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:35.856 { 00:08:35.856 "nbd_device": "/dev/nbd0", 00:08:35.856 "bdev_name": "Nvme0n1" 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "nbd_device": "/dev/nbd1", 00:08:35.856 "bdev_name": "Nvme1n1" 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "nbd_device": "/dev/nbd2", 00:08:35.856 "bdev_name": "Nvme2n1" 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "nbd_device": "/dev/nbd3", 00:08:35.856 "bdev_name": "Nvme2n2" 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "nbd_device": "/dev/nbd4", 00:08:35.856 "bdev_name": "Nvme2n3" 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "nbd_device": "/dev/nbd5", 00:08:35.856 "bdev_name": "Nvme3n1" 00:08:35.856 } 00:08:35.856 ]' 00:08:35.856 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:35.856 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:35.856 { 00:08:35.856 "nbd_device": "/dev/nbd0", 00:08:35.856 "bdev_name": "Nvme0n1" 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "nbd_device": "/dev/nbd1", 00:08:35.856 "bdev_name": "Nvme1n1" 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "nbd_device": "/dev/nbd2", 00:08:35.856 "bdev_name": "Nvme2n1" 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "nbd_device": "/dev/nbd3", 00:08:35.856 "bdev_name": "Nvme2n2" 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "nbd_device": "/dev/nbd4", 00:08:35.856 "bdev_name": "Nvme2n3" 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "nbd_device": "/dev/nbd5", 00:08:35.856 "bdev_name": "Nvme3n1" 00:08:35.856 } 00:08:35.856 ]' 00:08:35.856 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:35.856 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:35.856 20:25:29 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.856 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:35.856 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:35.856 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:35.856 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.856 20:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:36.115 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:36.115 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:36.115 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:36.115 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.115 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.115 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.115 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.115 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.115 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.115 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:36.373 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:36.373 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:36.373 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:36.373 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.373 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.373 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:36.373 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.373 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.373 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.373 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:36.632 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:36.632 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:36.632 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:36.632 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.632 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.632 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:36.632 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.632 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.632 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.632 20:25:30 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:36.890 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:36.890 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:36.890 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:36.890 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.890 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.890 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:36.890 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.890 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.890 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.890 20:25:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:37.149 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:37.149 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:37.149 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:37.149 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.149 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.149 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:37.149 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:37.149 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.149 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.149 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:37.407 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:37.407 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:37.407 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:37.407 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.407 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.407 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:37.407 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:37.407 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.407 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:37.407 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.407 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo 
'[]' 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:37.976 20:25:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:38.235 /dev/nbd0 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.235 1+0 records in 00:08:38.235 1+0 records out 00:08:38.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471789 s, 8.7 MB/s 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.235 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:38.236 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.236 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:38.236 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:38.236 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.236 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:38.236 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:38.493 /dev/nbd1 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.493 1+0 records in 00:08:38.493 1+0 records out 00:08:38.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448434 s, 9.1 MB/s 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:38.493 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.494 20:25:32 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:38.494 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:38.752 /dev/nbd10 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.752 1+0 records in 00:08:38.752 1+0 records out 00:08:38.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000803521 s, 5.1 MB/s 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:38.752 20:25:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:39.009 /dev/nbd11 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd 
if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:39.010 1+0 records in 00:08:39.010 1+0 records out 00:08:39.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00252746 s, 1.6 MB/s 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:39.010 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:39.577 /dev/nbd12 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:39.577 1+0 records in 00:08:39.577 1+0 records out 00:08:39.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554036 s, 7.4 MB/s 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:39.577 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:39.836 /dev/nbd13 00:08:39.836 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:39.836 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd13 00:08:39.836 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:39.836 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:39.837 1+0 records in 00:08:39.837 1+0 records out 00:08:39.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695537 s, 5.9 MB/s 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.837 20:25:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:40.095 { 00:08:40.095 "nbd_device": "/dev/nbd0", 00:08:40.095 "bdev_name": "Nvme0n1" 00:08:40.095 }, 00:08:40.095 { 00:08:40.095 "nbd_device": "/dev/nbd1", 00:08:40.095 "bdev_name": "Nvme1n1" 00:08:40.095 }, 00:08:40.095 { 00:08:40.095 "nbd_device": "/dev/nbd10", 00:08:40.095 "bdev_name": "Nvme2n1" 00:08:40.095 }, 00:08:40.095 { 00:08:40.095 "nbd_device": "/dev/nbd11", 00:08:40.095 "bdev_name": "Nvme2n2" 00:08:40.095 }, 00:08:40.095 { 00:08:40.095 "nbd_device": "/dev/nbd12", 00:08:40.095 "bdev_name": "Nvme2n3" 00:08:40.095 }, 00:08:40.095 { 00:08:40.095 "nbd_device": "/dev/nbd13", 00:08:40.095 "bdev_name": "Nvme3n1" 00:08:40.095 } 00:08:40.095 ]' 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:40.095 { 00:08:40.095 "nbd_device": "/dev/nbd0", 00:08:40.095 "bdev_name": "Nvme0n1" 00:08:40.095 }, 00:08:40.095 { 00:08:40.095 "nbd_device": "/dev/nbd1", 00:08:40.095 "bdev_name": "Nvme1n1" 00:08:40.095 }, 00:08:40.095 { 00:08:40.095 "nbd_device": "/dev/nbd10", 00:08:40.095 "bdev_name": "Nvme2n1" 00:08:40.095 }, 00:08:40.095 { 00:08:40.095 "nbd_device": "/dev/nbd11", 00:08:40.095 "bdev_name": "Nvme2n2" 00:08:40.095 }, 00:08:40.095 { 00:08:40.095 "nbd_device": 
"/dev/nbd12", 00:08:40.095 "bdev_name": "Nvme2n3" 00:08:40.095 }, 00:08:40.095 { 00:08:40.095 "nbd_device": "/dev/nbd13", 00:08:40.095 "bdev_name": "Nvme3n1" 00:08:40.095 } 00:08:40.095 ]' 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:40.095 /dev/nbd1 00:08:40.095 /dev/nbd10 00:08:40.095 /dev/nbd11 00:08:40.095 /dev/nbd12 00:08:40.095 /dev/nbd13' 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:40.095 /dev/nbd1 00:08:40.095 /dev/nbd10 00:08:40.095 /dev/nbd11 00:08:40.095 /dev/nbd12 00:08:40.095 /dev/nbd13' 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:40.095 256+0 records in 00:08:40.095 256+0 records out 00:08:40.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00857054 s, 122 MB/s 00:08:40.095 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:40.096 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:40.354 256+0 records in 00:08:40.354 256+0 records out 00:08:40.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136515 s, 7.7 MB/s 00:08:40.354 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:40.354 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:40.354 256+0 records in 00:08:40.354 256+0 records out 00:08:40.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124007 s, 8.5 MB/s 00:08:40.354 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:40.354 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:40.613 256+0 records in 00:08:40.613 256+0 records out 00:08:40.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137613 s, 7.6 MB/s 00:08:40.613 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:08:40.613 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:40.613 256+0 records in 00:08:40.613 256+0 records out 00:08:40.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125745 s, 8.3 MB/s 00:08:40.613 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:40.613 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:40.871 256+0 records in 00:08:40.871 256+0 records out 00:08:40.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118234 s, 8.9 MB/s 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:40.871 256+0 records in 00:08:40.871 256+0 records out 00:08:40.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124411 s, 8.4 MB/s 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.871 20:25:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:40.871 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.871 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:40.871 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.871 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:40.871 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.871 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:41.130 20:25:35 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:41.130 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:41.131 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.131 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:41.131 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:41.131 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:41.131 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.131 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:41.389 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:41.389 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:41.389 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:41.389 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.389 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.389 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:41.389 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:41.389 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.389 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.389 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:41.648 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:41.648 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:41.648 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:41.648 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.648 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.648 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:41.648 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:41.648 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.648 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.648 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:41.907 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:41.907 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:41.907 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:41.907 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.907 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.907 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 
/proc/partitions 00:08:41.907 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:41.907 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.907 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.907 20:25:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:42.166 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:42.166 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:42.166 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:42.166 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.166 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.166 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:42.166 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:42.166 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.166 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.166 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:42.735 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:42.735 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:42.735 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:42.735 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.735 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.735 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:42.735 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:42.735 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.735 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.735 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:42.993 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:42.993 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:42.993 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:42.993 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.994 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.994 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:42.994 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:42.994 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.994 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:42.994 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.994 20:25:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:43.252 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:43.252 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:43.252 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:43.253 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:43.512 malloc_lvol_verify 00:08:43.512 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:43.771 2b871df9-da07-4647-a6a1-b75c74ed8d34 00:08:43.771 20:25:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:44.029 e55e5f9d-8f0e-4c05-aa75-e792c569c9d3 00:08:44.029 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:44.287 /dev/nbd0 00:08:44.287 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:44.287 mke2fs 1.46.5 (30-Dec-2021) 00:08:44.287 Discarding device blocks: 0/4096 done 00:08:44.287 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:44.287 00:08:44.287 Allocating group tables: 0/1 done 00:08:44.287 Writing inode tables: 0/1 done 00:08:44.287 Creating journal (1024 blocks): done 00:08:44.287 Writing superblocks and filesystem accounting information: 0/1 done 00:08:44.287 00:08:44.287 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:44.287 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:44.287 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.287 20:25:38 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:44.287 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:44.287 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:44.287 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.287 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 79055 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 79055 ']' 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 79055 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79055 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:44.545 killing process with pid 79055 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79055' 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 79055 00:08:44.545 20:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 79055 00:08:44.803 20:25:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:44.803 00:08:44.803 real 0m12.142s 00:08:44.803 user 0m17.876s 00:08:44.803 sys 0m4.203s 00:08:44.803 20:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.803 20:25:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:44.803 ************************************ 00:08:44.803 END TEST bdev_nbd 00:08:44.803 ************************************ 00:08:45.062 20:25:38 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:45.062 20:25:38 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:45.062 20:25:38 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:08:45.062 skipping fio tests on NVMe due to multi-ns failures. 00:08:45.062 20:25:38 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
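For readers following the bdev_nbd trace above, the whole stage reduces to a start/write/compare/stop cycle driven through rpc.py against the /var/tmp/spdk-nbd.sock socket. A minimal hand-run sketch, reusing the bdev name, block size, and cmp window visible in this log (the scratch-file path below is arbitrary, and an SPDK target already exposing the Nvme bdevs is assumed to be running):

    # expose the Nvme0n1 bdev as a kernel nbd device (socket path as used by this run)
    sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    # write 1 MiB of random data through the nbd device, then read it back and compare
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    sudo dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    sudo cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
    # detach the nbd device again
    sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
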
00:08:45.062 20:25:38 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:45.062 20:25:38 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:45.062 20:25:38 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:45.062 20:25:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.062 20:25:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.062 ************************************ 00:08:45.062 START TEST bdev_verify 00:08:45.062 ************************************ 00:08:45.062 20:25:39 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:45.062 [2024-07-12 20:25:39.086779] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:45.062 [2024-07-12 20:25:39.086963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79454 ] 00:08:45.320 [2024-07-12 20:25:39.230300] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:45.320 [2024-07-12 20:25:39.252328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:45.320 [2024-07-12 20:25:39.350011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.320 [2024-07-12 20:25:39.350085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.887 Running I/O for 5 seconds... 
00:08:51.152 00:08:51.152 Latency(us) 00:08:51.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.152 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.152 Verification LBA range: start 0x0 length 0xbd0bd 00:08:51.152 Nvme0n1 : 5.11 1454.22 5.68 0.00 0.00 87836.67 17754.30 77213.32 00:08:51.152 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.152 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:51.152 Nvme0n1 : 5.11 1452.77 5.67 0.00 0.00 87554.63 21448.15 77689.95 00:08:51.152 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.152 Verification LBA range: start 0x0 length 0xa0000 00:08:51.153 Nvme1n1 : 5.11 1453.52 5.68 0.00 0.00 87749.79 17158.52 72447.07 00:08:51.153 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.153 Verification LBA range: start 0xa0000 length 0xa0000 00:08:51.153 Nvme1n1 : 5.11 1451.70 5.67 0.00 0.00 87463.03 20375.74 74353.57 00:08:51.153 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.153 Verification LBA range: start 0x0 length 0x80000 00:08:51.153 Nvme2n1 : 5.11 1452.68 5.67 0.00 0.00 87634.32 17635.14 69587.32 00:08:51.153 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.153 Verification LBA range: start 0x80000 length 0x80000 00:08:51.153 Nvme2n1 : 5.12 1450.49 5.67 0.00 0.00 87368.25 13822.14 78643.20 00:08:51.153 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.153 Verification LBA range: start 0x0 length 0x80000 00:08:51.153 Nvme2n2 : 5.11 1451.68 5.67 0.00 0.00 87540.64 18588.39 71970.44 00:08:51.153 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.153 Verification LBA range: start 0x80000 length 0x80000 00:08:51.153 Nvme2n2 : 5.12 1449.42 5.66 0.00 0.00 87279.43 9472.93 82932.83 00:08:51.153 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.153 Verification LBA range: start 0x0 length 0x80000 00:08:51.153 Nvme2n3 : 5.12 1450.55 5.67 0.00 0.00 87459.13 15728.64 76260.07 00:08:51.153 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.153 Verification LBA range: start 0x80000 length 0x80000 00:08:51.153 Nvme2n3 : 5.10 1454.46 5.68 0.00 0.00 87787.58 17039.36 82932.83 00:08:51.153 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:51.153 Verification LBA range: start 0x0 length 0x20000 00:08:51.153 Nvme3n1 : 5.12 1449.55 5.66 0.00 0.00 87367.53 10068.71 77689.95 00:08:51.153 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:51.153 Verification LBA range: start 0x20000 length 0x20000 00:08:51.153 Nvme3n1 : 5.11 1453.70 5.68 0.00 0.00 87659.43 20018.27 80073.08 00:08:51.153 =================================================================================================================== 00:08:51.153 Total : 17424.72 68.07 0.00 0.00 87558.37 9472.93 82932.83 00:08:51.411 00:08:51.411 real 0m6.482s 00:08:51.411 user 0m11.885s 00:08:51.411 sys 0m0.297s 00:08:51.411 20:25:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.411 20:25:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:51.411 ************************************ 00:08:51.411 END TEST bdev_verify 00:08:51.411 ************************************ 00:08:51.411 20:25:45 blockdev_nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:08:51.411 20:25:45 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:51.411 20:25:45 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:51.411 20:25:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.411 20:25:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.411 ************************************ 00:08:51.411 START TEST bdev_verify_big_io 00:08:51.411 ************************************ 00:08:51.411 20:25:45 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:51.669 [2024-07-12 20:25:45.629387] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:51.669 [2024-07-12 20:25:45.629575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79541 ] 00:08:51.669 [2024-07-12 20:25:45.781850] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:51.669 [2024-07-12 20:25:45.804106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.927 [2024-07-12 20:25:45.902382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.927 [2024-07-12 20:25:45.902450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.494 Running I/O for 5 seconds... 
00:08:59.056 00:08:59.056 Latency(us) 00:08:59.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.056 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:59.056 Verification LBA range: start 0x0 length 0xbd0b 00:08:59.056 Nvme0n1 : 5.79 132.28 8.27 0.00 0.00 931909.24 19541.64 1029510.98 00:08:59.056 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:59.056 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:59.056 Nvme0n1 : 5.80 132.45 8.28 0.00 0.00 827713.44 10009.13 1143901.09 00:08:59.056 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:59.056 Verification LBA range: start 0x0 length 0xa000 00:08:59.056 Nvme1n1 : 5.79 130.32 8.14 0.00 0.00 905571.95 53620.36 960876.92 00:08:59.056 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:59.056 Verification LBA range: start 0xa000 length 0xa000 00:08:59.056 Nvme1n1 : 5.58 114.61 7.16 0.00 0.00 1078514.22 30980.65 1105771.05 00:08:59.056 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:59.056 Verification LBA range: start 0x0 length 0x8000 00:08:59.056 Nvme2n1 : 5.80 129.17 8.07 0.00 0.00 897404.12 55765.18 1403185.34 00:08:59.056 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:59.056 Verification LBA range: start 0x8000 length 0x8000 00:08:59.056 Nvme2n1 : 5.74 115.13 7.20 0.00 0.00 1035240.94 109147.23 1021884.97 00:08:59.056 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:59.056 Verification LBA range: start 0x0 length 0x8000 00:08:59.056 Nvme2n2 : 5.85 135.53 8.47 0.00 0.00 832732.51 22758.87 1426063.36 00:08:59.056 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:59.056 Verification LBA range: start 0x8000 length 0x8000 00:08:59.056 Nvme2n2 : 5.75 115.04 7.19 0.00 0.00 1006864.42 155379.90 1029510.98 00:08:59.056 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:59.056 Verification LBA range: start 0x0 length 0x8000 00:08:59.056 Nvme2n3 : 5.87 139.25 8.70 0.00 0.00 784534.25 24546.21 1448941.38 00:08:59.056 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:59.056 Verification LBA range: start 0x8000 length 0x8000 00:08:59.056 Nvme2n3 : 5.79 129.52 8.10 0.00 0.00 896673.40 7804.74 1052389.00 00:08:59.056 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:59.056 Verification LBA range: start 0x0 length 0x2000 00:08:59.056 Nvme3n1 : 5.93 177.17 11.07 0.00 0.00 606982.40 1467.11 1098145.05 00:08:59.056 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:59.056 Verification LBA range: start 0x2000 length 0x2000 00:08:59.056 Nvme3n1 : 5.79 129.45 8.09 0.00 0.00 871643.10 7864.32 1075267.03 00:08:59.056 =================================================================================================================== 00:08:59.056 Total : 1579.92 98.75 0.00 0.00 874464.24 1467.11 1448941.38 00:08:59.056 00:08:59.056 real 0m7.490s 00:08:59.056 user 0m13.886s 00:08:59.056 sys 0m0.346s 00:08:59.057 20:25:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.057 20:25:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:59.057 ************************************ 00:08:59.057 END TEST bdev_verify_big_io 00:08:59.057 ************************************ 00:08:59.057 
20:25:53 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:59.057 20:25:53 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:59.057 20:25:53 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:59.057 20:25:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.057 20:25:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.057 ************************************ 00:08:59.057 START TEST bdev_write_zeroes 00:08:59.057 ************************************ 00:08:59.057 20:25:53 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:59.057 [2024-07-12 20:25:53.167443] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:08:59.057 [2024-07-12 20:25:53.167638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79645 ] 00:08:59.315 [2024-07-12 20:25:53.321664] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:59.315 [2024-07-12 20:25:53.344415] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.315 [2024-07-12 20:25:53.453699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.882 Running I/O for 1 seconds... 
00:09:00.814 00:09:00.814 Latency(us) 00:09:00.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.814 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:00.814 Nvme0n1 : 1.02 7351.84 28.72 0.00 0.00 17360.33 10426.18 43372.92 00:09:00.814 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:00.814 Nvme1n1 : 1.02 7340.06 28.67 0.00 0.00 17360.01 11319.85 42181.35 00:09:00.814 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:00.814 Nvme2n1 : 1.02 7328.89 28.63 0.00 0.00 17316.10 11081.54 42181.35 00:09:00.814 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:00.814 Nvme2n2 : 1.02 7317.87 28.59 0.00 0.00 17259.46 11379.43 42181.35 00:09:00.814 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:00.814 Nvme2n3 : 1.02 7306.96 28.54 0.00 0.00 17234.52 9115.46 41943.04 00:09:00.814 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:00.814 Nvme3n1 : 1.03 7353.86 28.73 0.00 0.00 17158.82 8877.15 41704.73 00:09:00.814 =================================================================================================================== 00:09:00.814 Total : 43999.49 171.87 0.00 0.00 17281.36 8877.15 43372.92 00:09:01.379 00:09:01.379 real 0m2.161s 00:09:01.379 user 0m1.773s 00:09:01.379 sys 0m0.270s 00:09:01.379 20:25:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.379 20:25:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:01.379 ************************************ 00:09:01.379 END TEST bdev_write_zeroes 00:09:01.379 ************************************ 00:09:01.379 20:25:55 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:01.379 20:25:55 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:01.379 20:25:55 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:01.379 20:25:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.379 20:25:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:01.379 ************************************ 00:09:01.379 START TEST bdev_json_nonenclosed 00:09:01.379 ************************************ 00:09:01.379 20:25:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:01.379 [2024-07-12 20:25:55.369434] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:09:01.379 [2024-07-12 20:25:55.369626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79687 ] 00:09:01.379 [2024-07-12 20:25:55.513503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:01.636 [2024-07-12 20:25:55.534583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.636 [2024-07-12 20:25:55.634284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.636 [2024-07-12 20:25:55.634414] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:01.636 [2024-07-12 20:25:55.634456] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:01.636 [2024-07-12 20:25:55.634486] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:01.636 00:09:01.636 real 0m0.487s 00:09:01.636 user 0m0.241s 00:09:01.636 sys 0m0.141s 00:09:01.636 20:25:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:09:01.636 20:25:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.636 20:25:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:01.636 ************************************ 00:09:01.636 END TEST bdev_json_nonenclosed 00:09:01.636 ************************************ 00:09:01.892 20:25:55 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:09:01.892 20:25:55 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:09:01.892 20:25:55 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:01.892 20:25:55 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:01.892 20:25:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.892 20:25:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:01.892 ************************************ 00:09:01.892 START TEST bdev_json_nonarray 00:09:01.892 ************************************ 00:09:01.892 20:25:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:01.892 [2024-07-12 20:25:55.920046] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:09:01.892 [2024-07-12 20:25:55.920259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79707 ] 00:09:02.149 [2024-07-12 20:25:56.071983] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:02.149 [2024-07-12 20:25:56.092481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.149 [2024-07-12 20:25:56.186748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.149 [2024-07-12 20:25:56.186896] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:02.149 [2024-07-12 20:25:56.186933] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:02.149 [2024-07-12 20:25:56.186952] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:02.405 00:09:02.405 real 0m0.485s 00:09:02.405 user 0m0.251s 00:09:02.405 sys 0m0.129s 00:09:02.405 20:25:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:09:02.405 20:25:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.405 20:25:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:02.405 ************************************ 00:09:02.405 END TEST bdev_json_nonarray 00:09:02.405 ************************************ 00:09:02.405 20:25:56 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:09:02.405 20:25:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:09:02.405 20:25:56 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:09:02.405 20:25:56 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:09:02.405 20:25:56 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:09:02.405 20:25:56 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:02.405 20:25:56 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:09:02.405 20:25:56 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:02.405 20:25:56 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:02.405 20:25:56 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:02.405 20:25:56 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:02.405 20:25:56 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:02.405 20:25:56 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:02.405 00:09:02.405 real 0m34.399s 00:09:02.405 user 0m52.719s 00:09:02.405 sys 0m6.852s 00:09:02.405 20:25:56 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.405 20:25:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:02.405 ************************************ 00:09:02.406 END TEST blockdev_nvme 00:09:02.406 ************************************ 00:09:02.406 20:25:56 -- common/autotest_common.sh@1142 -- # return 0 00:09:02.406 20:25:56 -- spdk/autotest.sh@213 -- # uname -s 00:09:02.406 20:25:56 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:09:02.406 20:25:56 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:02.406 20:25:56 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:02.406 20:25:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.406 20:25:56 -- common/autotest_common.sh@10 -- # set +x 00:09:02.406 ************************************ 00:09:02.406 START TEST blockdev_nvme_gpt 00:09:02.406 ************************************ 00:09:02.406 20:25:56 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:02.406 * Looking for test storage... 
00:09:02.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79783 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 79783 00:09:02.406 20:25:56 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 79783 ']' 00:09:02.406 20:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:02.406 20:25:56 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.406 20:25:56 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.406 20:25:56 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:02.406 20:25:56 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.406 20:25:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:02.663 [2024-07-12 20:25:56.630757] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:09:02.663 [2024-07-12 20:25:56.630998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79783 ] 00:09:02.663 [2024-07-12 20:25:56.785149] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:02.663 [2024-07-12 20:25:56.807715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.920 [2024-07-12 20:25:56.905536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.489 20:25:57 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.489 20:25:57 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:09:03.489 20:25:57 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:03.489 20:25:57 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:09:03.489 20:25:57 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:04.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:04.056 Waiting for block devices as requested 00:09:04.056 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.314 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.314 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.314 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.599 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 
00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:09:09.599 20:26:03 
blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:09:09.599 BYT; 00:09:09.599 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:09:09.599 BYT; 00:09:09.599 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:09.599 20:26:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:09.599 20:26:03 
blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:09.599 20:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:09:10.535 The operation has completed successfully. 00:09:10.535 20:26:04 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:09:11.912 The operation has completed successfully. 00:09:11.912 20:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:12.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:12.794 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:12.794 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:12.794 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:12.794 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:12.794 20:26:06 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:09:12.794 20:26:06 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.794 20:26:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:12.794 [] 00:09:12.794 20:26:06 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.794 20:26:06 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:09:12.794 20:26:06 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:12.794 20:26:06 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:12.794 20:26:06 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:12.794 20:26:06 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:12.794 20:26:06 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.794 20:26:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.361 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.361 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:09:13.361 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.361 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- 
# rpc_cmd save_subsystem_config -n bdev 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.361 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.361 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:13.361 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:13.361 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:13.361 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.361 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:13.361 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:13.362 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "cfe1fb15-dc23-4285-b24b-a6c5d757619f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cfe1fb15-dc23-4285-b24b-a6c5d757619f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b80a57b4-851b-4400-9dc2-46c21e0bdbdd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b80a57b4-851b-4400-9dc2-46c21e0bdbdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ab4842af-8baa-45a5-b780-529d1c43e940"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "ab4842af-8baa-45a5-b780-529d1c43e940",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "36af3709-9db0-4994-8bb0-6d439cb2c71f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "36af3709-9db0-4994-8bb0-6d439cb2c71f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2bcec3d5-ed14-4d8c-9a3c-d71052aceb12"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2bcec3d5-ed14-4d8c-9a3c-d71052aceb12",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:13.362 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:13.362 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:09:13.362 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:13.362 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 79783 00:09:13.362 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 79783 ']' 00:09:13.362 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 79783 00:09:13.362 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:09:13.362 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.362 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79783 00:09:13.362 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:13.362 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:13.362 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79783' 00:09:13.362 killing process with pid 79783 00:09:13.362 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 79783 00:09:13.362 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 79783 00:09:13.929 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:13.929 20:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:13.929 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:13.929 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.929 20:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:13.929 ************************************ 00:09:13.929 START TEST bdev_hello_world 00:09:13.929 ************************************ 00:09:13.929 20:26:07 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:13.929 [2024-07-12 20:26:08.025535] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:09:13.929 [2024-07-12 20:26:08.025751] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80396 ] 00:09:14.188 [2024-07-12 20:26:08.178213] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:14.188 [2024-07-12 20:26:08.200408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.188 [2024-07-12 20:26:08.299206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.756 [2024-07-12 20:26:08.725497] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:14.756 [2024-07-12 20:26:08.725566] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:14.756 [2024-07-12 20:26:08.725602] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:14.756 [2024-07-12 20:26:08.731072] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:14.756 [2024-07-12 20:26:08.731763] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:14.756 [2024-07-12 20:26:08.731816] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:14.756 [2024-07-12 20:26:08.732062] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:14.756 00:09:14.756 [2024-07-12 20:26:08.732115] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:15.015 00:09:15.015 real 0m1.069s 00:09:15.015 user 0m0.691s 00:09:15.015 sys 0m0.271s 00:09:15.015 20:26:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.015 20:26:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:15.015 ************************************ 00:09:15.015 END TEST bdev_hello_world 00:09:15.015 ************************************ 00:09:15.015 20:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:15.015 20:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:15.015 20:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:15.015 20:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.015 20:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:15.015 ************************************ 00:09:15.015 START TEST bdev_bounds 00:09:15.015 ************************************ 00:09:15.015 20:26:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:09:15.015 20:26:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=80427 00:09:15.015 20:26:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:15.015 20:26:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:15.015 Process bdevio pid: 80427 00:09:15.015 20:26:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 80427' 00:09:15.015 20:26:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 80427 00:09:15.015 20:26:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 80427 ']' 
00:09:15.015 20:26:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.015 20:26:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.015 20:26:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.015 20:26:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.015 20:26:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:15.015 [2024-07-12 20:26:09.152485] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:09:15.015 [2024-07-12 20:26:09.152638] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80427 ] 00:09:15.274 [2024-07-12 20:26:09.300888] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:15.274 [2024-07-12 20:26:09.322367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:15.533 [2024-07-12 20:26:09.426580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.533 [2024-07-12 20:26:09.426683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.533 [2024-07-12 20:26:09.426789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.099 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.099 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:09:16.099 20:26:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:16.358 I/O targets: 00:09:16.358 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:16.358 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:16.358 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:16.358 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:16.358 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:16.358 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:16.358 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:16.358 00:09:16.358 00:09:16.358 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.358 http://cunit.sourceforge.net/ 00:09:16.358 00:09:16.358 00:09:16.358 Suite: bdevio tests on: Nvme3n1 00:09:16.358 Test: blockdev write read block ...passed 00:09:16.358 Test: blockdev write zeroes read block ...passed 00:09:16.358 Test: blockdev write zeroes read no split ...passed 00:09:16.358 Test: blockdev write zeroes read split ...passed 00:09:16.358 Test: blockdev write zeroes read split partial ...passed 00:09:16.358 Test: blockdev reset ...[2024-07-12 20:26:10.325782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:16.358 [2024-07-12 20:26:10.328294] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:16.358 passed 00:09:16.358 Test: blockdev write read 8 blocks ...passed 00:09:16.358 Test: blockdev write read size > 128k ...passed 00:09:16.358 Test: blockdev write read invalid size ...passed 00:09:16.358 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:16.358 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:16.358 Test: blockdev write read max offset ...passed 00:09:16.358 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:16.358 Test: blockdev writev readv 8 blocks ...passed 00:09:16.358 Test: blockdev writev readv 30 x 1block ...passed 00:09:16.358 Test: blockdev writev readv block ...passed 00:09:16.358 Test: blockdev writev readv size > 128k ...passed 00:09:16.358 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:16.358 Test: blockdev comparev and writev ...[2024-07-12 20:26:10.335728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dde3d000 len:0x1000 00:09:16.358 passed 00:09:16.358 Test: blockdev nvme passthru rw ...[2024-07-12 20:26:10.335815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:16.358 passed 00:09:16.358 Test: blockdev nvme passthru vendor specific ...[2024-07-12 20:26:10.336741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:16.358 passed 00:09:16.358 Test: blockdev nvme admin passthru ...[2024-07-12 20:26:10.336811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:16.358 passed 00:09:16.358 Test: blockdev copy ...passed 00:09:16.358 Suite: bdevio tests on: Nvme2n3 00:09:16.358 Test: blockdev write read block ...passed 00:09:16.358 Test: blockdev write zeroes read block ...passed 00:09:16.358 Test: blockdev write zeroes read no split ...passed 00:09:16.358 Test: blockdev write zeroes read split ...passed 00:09:16.358 Test: blockdev write zeroes read split partial ...passed 00:09:16.358 Test: blockdev reset ...[2024-07-12 20:26:10.361965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:16.358 passed 00:09:16.358 Test: blockdev write read 8 blocks ...[2024-07-12 20:26:10.364635] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:16.358 passed 00:09:16.358 Test: blockdev write read size > 128k ...passed 00:09:16.358 Test: blockdev write read invalid size ...passed 00:09:16.358 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:16.358 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:16.358 Test: blockdev write read max offset ...passed 00:09:16.358 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:16.358 Test: blockdev writev readv 8 blocks ...passed 00:09:16.358 Test: blockdev writev readv 30 x 1block ...passed 00:09:16.358 Test: blockdev writev readv block ...passed 00:09:16.358 Test: blockdev writev readv size > 128k ...passed 00:09:16.358 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:16.358 Test: blockdev comparev and writev ...[2024-07-12 20:26:10.372009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dde39000 len:0x1000 00:09:16.358 [2024-07-12 20:26:10.372089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:16.358 passed 00:09:16.358 Test: blockdev nvme passthru rw ...passed 00:09:16.358 Test: blockdev nvme passthru vendor specific ...[2024-07-12 20:26:10.373105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:16.358 [2024-07-12 20:26:10.373160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:16.358 passed 00:09:16.358 Test: blockdev nvme admin passthru ...passed 00:09:16.358 Test: blockdev copy ...passed 00:09:16.358 Suite: bdevio tests on: Nvme2n2 00:09:16.358 Test: blockdev write read block ...passed 00:09:16.358 Test: blockdev write zeroes read block ...passed 00:09:16.358 Test: blockdev write zeroes read no split ...passed 00:09:16.358 Test: blockdev write zeroes read split ...passed 00:09:16.358 Test: blockdev write zeroes read split partial ...passed 00:09:16.358 Test: blockdev reset ...[2024-07-12 20:26:10.397385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:16.358 passed 00:09:16.358 Test: blockdev write read 8 blocks ...[2024-07-12 20:26:10.400135] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:16.358 passed 00:09:16.358 Test: blockdev write read size > 128k ...passed 00:09:16.358 Test: blockdev write read invalid size ...passed 00:09:16.358 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:16.358 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:16.358 Test: blockdev write read max offset ...passed 00:09:16.358 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:16.358 Test: blockdev writev readv 8 blocks ...passed 00:09:16.358 Test: blockdev writev readv 30 x 1block ...passed 00:09:16.358 Test: blockdev writev readv block ...passed 00:09:16.358 Test: blockdev writev readv size > 128k ...passed 00:09:16.358 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:16.358 Test: blockdev comparev and writev ...[2024-07-12 20:26:10.406896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dde35000 len:0x1000 00:09:16.358 [2024-07-12 20:26:10.406969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:16.358 passed 00:09:16.358 Test: blockdev nvme passthru rw ...passed 00:09:16.358 Test: blockdev nvme passthru vendor specific ...[2024-07-12 20:26:10.408014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:16.358 [2024-07-12 20:26:10.408084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:16.359 passed 00:09:16.359 Test: blockdev nvme admin passthru ...passed 00:09:16.359 Test: blockdev copy ...passed 00:09:16.359 Suite: bdevio tests on: Nvme2n1 00:09:16.359 Test: blockdev write read block ...passed 00:09:16.359 Test: blockdev write zeroes read block ...passed 00:09:16.359 Test: blockdev write zeroes read no split ...passed 00:09:16.359 Test: blockdev write zeroes read split ...passed 00:09:16.359 Test: blockdev write zeroes read split partial ...passed 00:09:16.359 Test: blockdev reset ...[2024-07-12 20:26:10.435475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:16.359 [2024-07-12 20:26:10.438366] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:16.359 passed 00:09:16.359 Test: blockdev write read 8 blocks ...passed 00:09:16.359 Test: blockdev write read size > 128k ...passed 00:09:16.359 Test: blockdev write read invalid size ...passed 00:09:16.359 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:16.359 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:16.359 Test: blockdev write read max offset ...passed 00:09:16.359 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:16.359 Test: blockdev writev readv 8 blocks ...passed 00:09:16.359 Test: blockdev writev readv 30 x 1block ...passed 00:09:16.359 Test: blockdev writev readv block ...passed 00:09:16.359 Test: blockdev writev readv size > 128k ...passed 00:09:16.359 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:16.359 Test: blockdev comparev and writev ...[2024-07-12 20:26:10.447100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dde2f000 len:0x1000 00:09:16.359 [2024-07-12 20:26:10.447177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:16.359 passed 00:09:16.359 Test: blockdev nvme passthru rw ...passed 00:09:16.359 Test: blockdev nvme passthru vendor specific ...passed 00:09:16.359 Test: blockdev nvme admin passthru ...[2024-07-12 20:26:10.448062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:16.359 [2024-07-12 20:26:10.448129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:16.359 passed 00:09:16.359 Test: blockdev copy ...passed 00:09:16.359 Suite: bdevio tests on: Nvme1n1 00:09:16.359 Test: blockdev write read block ...passed 00:09:16.359 Test: blockdev write zeroes read block ...passed 00:09:16.359 Test: blockdev write zeroes read no split ...passed 00:09:16.359 Test: blockdev write zeroes read split ...passed 00:09:16.359 Test: blockdev write zeroes read split partial ...passed 00:09:16.359 Test: blockdev reset ...[2024-07-12 20:26:10.470929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:16.359 passed 00:09:16.359 Test: blockdev write read 8 blocks ...[2024-07-12 20:26:10.473421] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:16.359 passed 00:09:16.359 Test: blockdev write read size > 128k ...passed 00:09:16.359 Test: blockdev write read invalid size ...passed 00:09:16.359 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:16.359 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:16.359 Test: blockdev write read max offset ...passed 00:09:16.359 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:16.359 Test: blockdev writev readv 8 blocks ...passed 00:09:16.359 Test: blockdev writev readv 30 x 1block ...passed 00:09:16.359 Test: blockdev writev readv block ...passed 00:09:16.359 Test: blockdev writev readv size > 128k ...passed 00:09:16.359 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:16.359 Test: blockdev comparev and writev ...[2024-07-12 20:26:10.480357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf60e000 len:0x1000 00:09:16.359 [2024-07-12 20:26:10.480426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:16.359 passed 00:09:16.359 Test: blockdev nvme passthru rw ...passed 00:09:16.359 Test: blockdev nvme passthru vendor specific ...[2024-07-12 20:26:10.481615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:16.359 [2024-07-12 20:26:10.481673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:16.359 passed 00:09:16.359 Test: blockdev nvme admin passthru ...passed 00:09:16.359 Test: blockdev copy ...passed 00:09:16.359 Suite: bdevio tests on: Nvme0n1p2 00:09:16.359 Test: blockdev write read block ...passed 00:09:16.359 Test: blockdev write zeroes read block ...passed 00:09:16.359 Test: blockdev write zeroes read no split ...passed 00:09:16.359 Test: blockdev write zeroes read split ...passed 00:09:16.621 Test: blockdev write zeroes read split partial ...passed 00:09:16.621 Test: blockdev reset ...[2024-07-12 20:26:10.512552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:16.621 [2024-07-12 20:26:10.515004] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:16.621 passed 00:09:16.621 Test: blockdev write read 8 blocks ...passed 00:09:16.621 Test: blockdev write read size > 128k ...passed 00:09:16.621 Test: blockdev write read invalid size ...passed 00:09:16.621 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:16.621 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:16.621 Test: blockdev write read max offset ...passed 00:09:16.621 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:16.621 Test: blockdev writev readv 8 blocks ...passed 00:09:16.621 Test: blockdev writev readv 30 x 1block ...passed 00:09:16.621 Test: blockdev writev readv block ...passed 00:09:16.621 Test: blockdev writev readv size > 128k ...passed 00:09:16.621 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:16.621 Test: blockdev comparev and writev ...passed 00:09:16.621 Test: blockdev nvme passthru rw ...passed 00:09:16.621 Test: blockdev nvme passthru vendor specific ...passed 00:09:16.621 Test: blockdev nvme admin passthru ...passed 00:09:16.621 Test: blockdev copy ...[2024-07-12 20:26:10.522094] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:16.621 separate metadata which is not supported yet. 00:09:16.621 passed 00:09:16.621 Suite: bdevio tests on: Nvme0n1p1 00:09:16.621 Test: blockdev write read block ...passed 00:09:16.621 Test: blockdev write zeroes read block ...passed 00:09:16.621 Test: blockdev write zeroes read no split ...passed 00:09:16.621 Test: blockdev write zeroes read split ...passed 00:09:16.621 Test: blockdev write zeroes read split partial ...passed 00:09:16.621 Test: blockdev reset ...[2024-07-12 20:26:10.540355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:16.621 [2024-07-12 20:26:10.542956] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:16.621 passed 00:09:16.621 Test: blockdev write read 8 blocks ...passed 00:09:16.621 Test: blockdev write read size > 128k ...passed 00:09:16.621 Test: blockdev write read invalid size ...passed 00:09:16.621 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:16.621 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:16.621 Test: blockdev write read max offset ...passed 00:09:16.621 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:16.621 Test: blockdev writev readv 8 blocks ...passed 00:09:16.621 Test: blockdev writev readv 30 x 1block ...passed 00:09:16.621 Test: blockdev writev readv block ...passed 00:09:16.621 Test: blockdev writev readv size > 128k ...passed 00:09:16.621 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:16.621 Test: blockdev comparev and writev ...passed 00:09:16.621 Test: blockdev nvme passthru rw ...passed 00:09:16.621 Test: blockdev nvme passthru vendor specific ...passed 00:09:16.621 Test: blockdev nvme admin passthru ...passed 00:09:16.621 Test: blockdev copy ...[2024-07-12 20:26:10.549747] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:16.621 separate metadata which is not supported yet. 
00:09:16.621 passed 00:09:16.621 00:09:16.621 Run Summary: Type Total Ran Passed Failed Inactive 00:09:16.621 suites 7 7 n/a 0 0 00:09:16.621 tests 161 161 161 0 0 00:09:16.621 asserts 1006 1006 1006 0 n/a 00:09:16.621 00:09:16.621 Elapsed time = 0.552 seconds 00:09:16.621 0 00:09:16.621 20:26:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 80427 00:09:16.621 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 80427 ']' 00:09:16.621 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 80427 00:09:16.621 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:09:16.621 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:16.621 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80427 00:09:16.621 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:16.621 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:16.621 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80427' 00:09:16.621 killing process with pid 80427 00:09:16.621 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 80427 00:09:16.621 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 80427 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:16.898 00:09:16.898 real 0m1.751s 00:09:16.898 user 0m4.327s 00:09:16.898 sys 0m0.421s 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:16.898 ************************************ 00:09:16.898 END TEST bdev_bounds 00:09:16.898 ************************************ 00:09:16.898 20:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:16.898 20:26:10 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:16.898 20:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:16.898 20:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.898 20:26:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:16.898 ************************************ 00:09:16.898 START TEST bdev_nbd 00:09:16.898 ************************************ 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 
'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=80476 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 80476 /var/tmp/spdk-nbd.sock 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 80476 ']' 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:16.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:16.898 20:26:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:16.898 [2024-07-12 20:26:10.936142] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:09:16.898 [2024-07-12 20:26:10.936694] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.162 [2024-07-12 20:26:11.081692] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
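The nbd test that follows is driven the same way: a bare bdev_svc app exposes the bdevs from bdev.json over /var/tmp/spdk-nbd.sock, each bdev is exported as an NBD block device via nbd_start_disk, a single 4 KiB direct read through dd proves the device is usable, and nbd_stop_disk tears it down again. A simplified sketch of that loop, assuming the repo paths shown in the log; nbd_common.sh itself adds the per-device retry and size checks omitted here:

  # start a bare SPDK app that exposes the bdevs from bdev.json over an RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  # export each bdev as /dev/nbdX and prove it is readable with one 4 KiB direct read
  for bdev in Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
      nbd=$($rpc nbd_start_disk "$bdev")                # prints e.g. /dev/nbd0
      grep -q -w "$(basename "$nbd")" /proc/partitions  # device node registered?
      dd if="$nbd" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
          bs=4096 count=1 iflag=direct
  done

  # list the active exports as JSON, then stop each one
  $rpc nbd_get_disks
  for nbd in $($rpc nbd_get_disks | jq -r '.[] | .nbd_device'); do
      $rpc nbd_stop_disk "$nbd"
  done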
00:09:17.162 [2024-07-12 20:26:11.103744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.162 [2024-07-12 20:26:11.195852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:17.729 20:26:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.296 1+0 records in 00:09:18.296 1+0 records out 00:09:18.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639819 s, 6.4 MB/s 00:09:18.296 20:26:12 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:18.296 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.578 1+0 records in 00:09:18.578 1+0 records out 00:09:18.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708176 s, 5.8 MB/s 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:18.578 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:18.849 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.850 1+0 records in 00:09:18.850 1+0 records out 00:09:18.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046275 s, 8.9 MB/s 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:18.850 20:26:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:19.113 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:19.113 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:19.113 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:19.113 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:19.114 1+0 records in 00:09:19.114 1+0 records out 00:09:19.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603207 s, 6.8 MB/s 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:19.114 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:19.372 1+0 records in 00:09:19.372 1+0 records out 00:09:19.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663347 s, 6.2 MB/s 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:19.372 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:19.630 
20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:19.630 1+0 records in 00:09:19.630 1+0 records out 00:09:19.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658279 s, 6.2 MB/s 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:19.630 20:26:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:20.196 1+0 records in 00:09:20.196 1+0 records out 00:09:20.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736827 s, 5.6 MB/s 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:20.196 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:20.455 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd0", 00:09:20.455 "bdev_name": "Nvme0n1p1" 00:09:20.455 }, 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd1", 00:09:20.455 "bdev_name": "Nvme0n1p2" 00:09:20.455 }, 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd2", 00:09:20.455 "bdev_name": "Nvme1n1" 00:09:20.455 }, 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd3", 00:09:20.455 "bdev_name": "Nvme2n1" 00:09:20.455 }, 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd4", 00:09:20.455 "bdev_name": "Nvme2n2" 00:09:20.455 }, 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd5", 00:09:20.455 "bdev_name": "Nvme2n3" 00:09:20.455 }, 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd6", 00:09:20.455 "bdev_name": "Nvme3n1" 00:09:20.455 } 00:09:20.455 ]' 00:09:20.455 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:20.455 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:20.455 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd0", 00:09:20.455 "bdev_name": "Nvme0n1p1" 00:09:20.455 }, 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd1", 00:09:20.455 "bdev_name": "Nvme0n1p2" 00:09:20.455 }, 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd2", 00:09:20.455 "bdev_name": "Nvme1n1" 00:09:20.455 }, 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd3", 00:09:20.455 "bdev_name": "Nvme2n1" 00:09:20.455 }, 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd4", 00:09:20.455 "bdev_name": "Nvme2n2" 00:09:20.455 }, 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd5", 00:09:20.455 "bdev_name": "Nvme2n3" 00:09:20.455 }, 00:09:20.455 { 00:09:20.455 "nbd_device": "/dev/nbd6", 00:09:20.455 "bdev_name": "Nvme3n1" 00:09:20.455 } 00:09:20.455 ]' 00:09:20.455 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:20.455 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.455 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:20.455 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:20.455 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:20.455 20:26:14 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.455 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:20.713 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:20.713 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:20.713 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:20.713 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.713 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.713 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:20.713 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:20.713 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.713 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.713 20:26:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:20.973 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:20.973 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:20.973 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:20.973 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.973 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.973 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:20.973 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:20.973 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.973 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.973 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd3 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.539 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:21.797 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:21.797 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:21.797 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:21.797 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.797 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.797 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:21.797 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:21.797 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.797 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.797 20:26:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:22.055 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:22.055 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:22.055 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:22.055 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.055 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.055 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:22.055 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:22.055 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.055 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.055 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:22.621 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:22.621 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:22.621 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:22.621 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.621 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.621 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:22.621 20:26:16 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:22.621 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.621 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:22.621 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.621 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # 
local i 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:22.880 20:26:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:23.139 /dev/nbd0 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:23.139 1+0 records in 00:09:23.139 1+0 records out 00:09:23.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475457 s, 8.6 MB/s 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:23.139 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:23.398 /dev/nbd1 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:23.398 1+0 records in 00:09:23.398 1+0 records out 00:09:23.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052904 s, 7.7 MB/s 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:23.398 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:23.656 /dev/nbd10 00:09:23.656 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:23.656 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:23.656 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:23.656 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:23.656 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:23.656 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:23.656 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:23.915 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:23.915 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:23.915 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:23.915 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:23.915 1+0 records in 00:09:23.915 1+0 records out 00:09:23.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759243 s, 5.4 MB/s 00:09:23.915 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:23.915 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:23.915 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:23.915 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:23.915 20:26:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:23.915 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:23.915 20:26:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:23.915 20:26:17 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:24.172 /dev/nbd11 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:24.172 1+0 records in 00:09:24.172 1+0 records out 00:09:24.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742982 s, 5.5 MB/s 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:24.172 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:24.430 /dev/nbd12 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:24.430 1+0 records in 00:09:24.430 1+0 records out 00:09:24.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000855682 s, 4.8 MB/s 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:24.430 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:24.688 /dev/nbd13 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:24.688 1+0 records in 00:09:24.688 1+0 records out 00:09:24.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793242 s, 5.2 MB/s 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:24.688 20:26:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:24.946 /dev/nbd14 00:09:24.946 20:26:19 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:24.946 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:24.946 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:09:24.946 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:24.946 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:24.947 1+0 records in 00:09:24.947 1+0 records out 00:09:24.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000863648 s, 4.7 MB/s 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.947 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd0", 00:09:25.513 "bdev_name": "Nvme0n1p1" 00:09:25.513 }, 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd1", 00:09:25.513 "bdev_name": "Nvme0n1p2" 00:09:25.513 }, 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd10", 00:09:25.513 "bdev_name": "Nvme1n1" 00:09:25.513 }, 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd11", 00:09:25.513 "bdev_name": "Nvme2n1" 00:09:25.513 }, 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd12", 00:09:25.513 "bdev_name": "Nvme2n2" 00:09:25.513 }, 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd13", 00:09:25.513 "bdev_name": "Nvme2n3" 00:09:25.513 }, 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd14", 00:09:25.513 "bdev_name": "Nvme3n1" 00:09:25.513 } 00:09:25.513 ]' 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd0", 00:09:25.513 "bdev_name": "Nvme0n1p1" 00:09:25.513 }, 00:09:25.513 { 
00:09:25.513 "nbd_device": "/dev/nbd1", 00:09:25.513 "bdev_name": "Nvme0n1p2" 00:09:25.513 }, 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd10", 00:09:25.513 "bdev_name": "Nvme1n1" 00:09:25.513 }, 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd11", 00:09:25.513 "bdev_name": "Nvme2n1" 00:09:25.513 }, 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd12", 00:09:25.513 "bdev_name": "Nvme2n2" 00:09:25.513 }, 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd13", 00:09:25.513 "bdev_name": "Nvme2n3" 00:09:25.513 }, 00:09:25.513 { 00:09:25.513 "nbd_device": "/dev/nbd14", 00:09:25.513 "bdev_name": "Nvme3n1" 00:09:25.513 } 00:09:25.513 ]' 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:25.513 /dev/nbd1 00:09:25.513 /dev/nbd10 00:09:25.513 /dev/nbd11 00:09:25.513 /dev/nbd12 00:09:25.513 /dev/nbd13 00:09:25.513 /dev/nbd14' 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:25.513 /dev/nbd1 00:09:25.513 /dev/nbd10 00:09:25.513 /dev/nbd11 00:09:25.513 /dev/nbd12 00:09:25.513 /dev/nbd13 00:09:25.513 /dev/nbd14' 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:25.513 256+0 records in 00:09:25.513 256+0 records out 00:09:25.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0072944 s, 144 MB/s 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:25.513 256+0 records in 00:09:25.513 256+0 records out 00:09:25.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145633 s, 7.2 MB/s 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:25.513 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:25.772 256+0 records in 00:09:25.772 256+0 
records out 00:09:25.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142061 s, 7.4 MB/s 00:09:25.772 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:25.772 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:25.772 256+0 records in 00:09:25.772 256+0 records out 00:09:25.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128645 s, 8.2 MB/s 00:09:25.772 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:25.772 20:26:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:26.073 256+0 records in 00:09:26.073 256+0 records out 00:09:26.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161701 s, 6.5 MB/s 00:09:26.073 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:26.073 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:26.073 256+0 records in 00:09:26.073 256+0 records out 00:09:26.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152339 s, 6.9 MB/s 00:09:26.073 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:26.073 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:26.331 256+0 records in 00:09:26.331 256+0 records out 00:09:26.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142433 s, 7.4 MB/s 00:09:26.331 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:26.332 256+0 records in 00:09:26.332 256+0 records out 00:09:26.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132498 s, 7.9 MB/s 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.332 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.590 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:26.849 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:26.849 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:26.849 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:26.849 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.849 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.849 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:26.849 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:26.849 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.849 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.849 20:26:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:27.107 20:26:21 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:27.107 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:27.107 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:27.107 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.107 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.107 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:27.107 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:27.107 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:27.107 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:27.107 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:27.366 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:27.366 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:27.366 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:27.366 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.366 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.366 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:27.366 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:27.366 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:27.366 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:27.366 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:27.625 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:27.625 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:27.625 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:27.625 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.625 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.625 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:27.625 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:27.625 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:27.625 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:27.625 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:27.884 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:27.884 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:27.884 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:27.884 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.884 20:26:21 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.884 20:26:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:27.884 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:27.884 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:27.884 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:27.884 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:28.451 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.452 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 
-- # count=0 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:29.017 20:26:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:29.017 malloc_lvol_verify 00:09:29.275 20:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:29.533 4a1d9c8c-cded-4df9-9734-33f3971b4c39 00:09:29.533 20:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:29.791 b0927fb7-f5bc-435b-8ec5-9c18ba15e900 00:09:29.791 20:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:30.050 /dev/nbd0 00:09:30.050 20:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:30.050 mke2fs 1.46.5 (30-Dec-2021) 00:09:30.050 Discarding device blocks: 0/4096 done 00:09:30.050 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:30.050 00:09:30.050 Allocating group tables: 0/1 done 00:09:30.050 Writing inode tables: 0/1 done 00:09:30.050 Creating journal (1024 blocks): done 00:09:30.050 Writing superblocks and filesystem accounting information: 0/1 done 00:09:30.050 00:09:30.050 20:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:30.050 20:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:30.050 20:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.050 20:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:30.050 20:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:30.050 20:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:30.050 20:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:30.050 20:26:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:30.308 20:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:30.308 20:26:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:30.308 20:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:30.308 20:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.308 20:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.308 20:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 80476 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 80476 ']' 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 80476 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80476 00:09:30.309 killing process with pid 80476 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80476' 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 80476 00:09:30.309 20:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 80476 00:09:30.567 ************************************ 00:09:30.567 END TEST bdev_nbd 00:09:30.567 ************************************ 00:09:30.567 20:26:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:30.567 00:09:30.567 real 0m13.687s 00:09:30.567 user 0m19.969s 00:09:30.567 sys 0m4.833s 00:09:30.567 20:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.567 20:26:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:30.567 20:26:24 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:30.567 20:26:24 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:30.567 20:26:24 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:09:30.567 20:26:24 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:09:30.567 skipping fio tests on NVMe due to multi-ns failures. 00:09:30.567 20:26:24 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
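For reference, the NBD round trip exercised above can be reproduced by hand against a running spdk-nbd target. The sketch below is illustrative only: it assumes the socket path, repo layout and bdev name that appear in this log (/var/tmp/spdk-nbd.sock, /home/vagrant/spdk_repo/spdk, Nvme0n1p1) and collapses the per-device loop down to a single device.

#!/usr/bin/env bash
# Minimal sketch of the NBD export/verify flow traced by nbd_common.sh above.
# Assumes spdk-nbd is already listening on /var/tmp/spdk-nbd.sock and that a
# bdev named Nvme0n1p1 exists; adjust paths and names for other setups.
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock
BDEV=Nvme0n1p1
NBD=/dev/nbd0

# Export the bdev as a kernel NBD device.
"$RPC" -s "$SOCK" nbd_start_disk "$BDEV" "$NBD"

# Wait for the kernel to publish it -- the same /proc/partitions poll that
# waitfornbd performs, capped at 20 attempts.
for _ in $(seq 1 20); do
  grep -q -w "$(basename "$NBD")" /proc/partitions && break
  sleep 0.1
done

# Smoke-test I/O: write a random 1 MiB pattern and compare it back.
tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=4096 count=256
dd if="$tmp" of="$NBD" bs=4096 count=256 oflag=direct
cmp -b -n 1M "$tmp" "$NBD"
rm -f "$tmp"

# Show what is currently exported, then tear the device down again.
"$RPC" -s "$SOCK" nbd_get_disks | jq -r '.[] | .nbd_device'
"$RPC" -s "$SOCK" nbd_stop_disk "$NBD"

The lvol leg at the end of the same test follows the identical pattern: it creates a small malloc bdev (bdev_malloc_create -b malloc_lvol_verify 16 512), layers an lvstore and an lvol on it (bdev_lvol_create_lvstore, bdev_lvol_create lvol 4 -l lvs), exports lvs/lvol on /dev/nbd0, and treats a clean mkfs.ext4 run as the pass criterion before stopping the disk again.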
00:09:30.567 20:26:24 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:30.567 20:26:24 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:30.567 20:26:24 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:30.567 20:26:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.567 20:26:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:30.568 ************************************ 00:09:30.568 START TEST bdev_verify 00:09:30.568 ************************************ 00:09:30.568 20:26:24 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:30.568 [2024-07-12 20:26:24.692575] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:09:30.568 [2024-07-12 20:26:24.692820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80920 ] 00:09:30.902 [2024-07-12 20:26:24.846879] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:30.902 [2024-07-12 20:26:24.869157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:30.902 [2024-07-12 20:26:24.968960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.902 [2024-07-12 20:26:24.969005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.468 Running I/O for 5 seconds... 
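The verify stage launched here is the bdevperf example application driven with a verify workload; the latency table that follows reports per-job throughput and latency. A hand-run equivalent, assuming the same checkout path and the bdev.json used throughout this log, is sketched below; the later big-I/O and write_zeroes stages reuse the same pattern with -o 65536 and with -w write_zeroes -t 1 respectively.

# Sketch: re-running the verify workload outside the autotest harness.
# Option meanings: -q queue depth, -o I/O size in bytes, -w workload type,
# -t run time in seconds, -m core mask (0x3 = cores 0 and 1), and -C lets
# every core in the mask drive each bdev, which is why the table pairs a
# Core Mask 0x1 and a Core Mask 0x2 job per device.
SPDK=/home/vagrant/spdk_repo/spdk

"$SPDK/build/examples/bdevperf" \
  --json "$SPDK/test/bdev/bdev.json" \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3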
00:09:36.738 00:09:36.738 Latency(us) 00:09:36.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.738 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x0 length 0x5e800 00:09:36.738 Nvme0n1p1 : 5.09 1333.41 5.21 0.00 0.00 95793.50 14894.55 82456.20 00:09:36.738 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x5e800 length 0x5e800 00:09:36.738 Nvme0n1p1 : 5.08 1210.08 4.73 0.00 0.00 105532.25 20494.89 102474.47 00:09:36.738 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x0 length 0x5e7ff 00:09:36.738 Nvme0n1p2 : 5.09 1332.51 5.21 0.00 0.00 95636.77 16324.42 80073.08 00:09:36.738 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:36.738 Nvme0n1p2 : 5.08 1209.54 4.72 0.00 0.00 105406.24 20375.74 95801.72 00:09:36.738 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x0 length 0xa0000 00:09:36.738 Nvme1n1 : 5.09 1331.74 5.20 0.00 0.00 95493.78 17992.61 76260.07 00:09:36.738 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0xa0000 length 0xa0000 00:09:36.738 Nvme1n1 : 5.08 1209.07 4.72 0.00 0.00 105271.55 20375.74 92465.34 00:09:36.738 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x0 length 0x80000 00:09:36.738 Nvme2n1 : 5.10 1331.31 5.20 0.00 0.00 95344.39 18111.77 73876.95 00:09:36.738 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x80000 length 0x80000 00:09:36.738 Nvme2n1 : 5.08 1208.56 4.72 0.00 0.00 105134.51 20018.27 93895.21 00:09:36.738 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x0 length 0x80000 00:09:36.738 Nvme2n2 : 5.10 1330.90 5.20 0.00 0.00 95199.44 18350.08 76260.07 00:09:36.738 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x80000 length 0x80000 00:09:36.738 Nvme2n2 : 5.09 1208.06 4.72 0.00 0.00 104992.48 20018.27 96754.97 00:09:36.738 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x0 length 0x80000 00:09:36.738 Nvme2n3 : 5.10 1330.50 5.20 0.00 0.00 95052.12 17873.45 79119.83 00:09:36.738 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x80000 length 0x80000 00:09:36.738 Nvme2n3 : 5.09 1207.59 4.72 0.00 0.00 104848.95 19660.80 98661.47 00:09:36.738 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x0 length 0x20000 00:09:36.738 Nvme3n1 : 5.10 1330.08 5.20 0.00 0.00 94925.20 14477.50 82456.20 00:09:36.738 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:36.738 Verification LBA range: start 0x20000 length 0x20000 00:09:36.738 Nvme3n1 : 5.09 1206.74 4.71 0.00 0.00 104716.42 12809.31 101521.22 00:09:36.738 =================================================================================================================== 00:09:36.738 Total : 17780.09 69.45 0.00 0.00 99997.05 12809.31 
102474.47 00:09:36.995 00:09:36.995 real 0m6.344s 00:09:36.995 user 0m11.659s 00:09:36.995 sys 0m0.309s 00:09:36.995 20:26:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.995 20:26:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:36.995 ************************************ 00:09:36.995 END TEST bdev_verify 00:09:36.995 ************************************ 00:09:36.995 20:26:30 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:36.995 20:26:30 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:36.995 20:26:30 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:36.995 20:26:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.995 20:26:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:36.995 ************************************ 00:09:36.995 START TEST bdev_verify_big_io 00:09:36.995 ************************************ 00:09:36.995 20:26:30 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:36.995 [2024-07-12 20:26:31.085054] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:09:36.995 [2024-07-12 20:26:31.085268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81007 ] 00:09:37.253 [2024-07-12 20:26:31.237844] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:37.253 [2024-07-12 20:26:31.260028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:37.253 [2024-07-12 20:26:31.358498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.253 [2024-07-12 20:26:31.358568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.816 Running I/O for 5 seconds... 
00:09:44.372 00:09:44.372 Latency(us) 00:09:44.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.372 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x0 length 0x5e80 00:09:44.372 Nvme0n1p1 : 5.83 111.14 6.95 0.00 0.00 1098790.48 28240.06 1166779.11 00:09:44.372 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x5e80 length 0x5e80 00:09:44.372 Nvme0n1p1 : 5.95 103.29 6.46 0.00 0.00 1193364.83 25141.99 1212535.16 00:09:44.372 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x0 length 0x5e7f 00:09:44.372 Nvme0n1p2 : 5.83 106.63 6.66 0.00 0.00 1111996.69 101044.60 1212535.16 00:09:44.372 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:44.372 Nvme0n1p2 : 5.96 102.09 6.38 0.00 0.00 1165871.33 91512.09 1143901.09 00:09:44.372 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x0 length 0xa000 00:09:44.372 Nvme1n1 : 5.84 113.61 7.10 0.00 0.00 1033678.76 98661.47 1128649.08 00:09:44.372 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0xa000 length 0xa000 00:09:44.372 Nvme1n1 : 5.98 100.44 6.28 0.00 0.00 1157383.60 37176.79 2226794.12 00:09:44.372 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x0 length 0x8000 00:09:44.372 Nvme2n1 : 5.91 119.27 7.45 0.00 0.00 969660.68 37653.41 1212535.16 00:09:44.372 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x8000 length 0x8000 00:09:44.372 Nvme2n1 : 5.99 105.43 6.59 0.00 0.00 1066243.06 37176.79 1601461.53 00:09:44.372 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x0 length 0x8000 00:09:44.372 Nvme2n2 : 5.91 119.25 7.45 0.00 0.00 940865.24 37176.79 1105771.05 00:09:44.372 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x8000 length 0x8000 00:09:44.372 Nvme2n2 : 6.00 110.68 6.92 0.00 0.00 987578.17 25618.62 1624339.55 00:09:44.372 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x0 length 0x8000 00:09:44.372 Nvme2n3 : 5.92 125.88 7.87 0.00 0.00 874213.61 32648.84 964689.92 00:09:44.372 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x8000 length 0x8000 00:09:44.372 Nvme2n3 : 6.01 114.05 7.13 0.00 0.00 931469.70 12571.00 2089525.99 00:09:44.372 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x0 length 0x2000 00:09:44.372 Nvme3n1 : 5.92 133.34 8.33 0.00 0.00 802879.46 3232.12 976128.93 00:09:44.372 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:44.372 Verification LBA range: start 0x2000 length 0x2000 00:09:44.372 Nvme3n1 : 6.09 143.56 8.97 0.00 0.00 727739.62 983.04 2425070.31 00:09:44.372 =================================================================================================================== 00:09:44.372 Total : 1608.64 100.54 0.00 0.00 990209.53 
983.04 2425070.31 00:09:44.630 ************************************ 00:09:44.630 END TEST bdev_verify_big_io 00:09:44.630 ************************************ 00:09:44.630 00:09:44.630 real 0m7.654s 00:09:44.630 user 0m14.165s 00:09:44.630 sys 0m0.349s 00:09:44.630 20:26:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.630 20:26:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:44.630 20:26:38 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:44.630 20:26:38 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:44.630 20:26:38 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:44.630 20:26:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.630 20:26:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:44.630 ************************************ 00:09:44.630 START TEST bdev_write_zeroes 00:09:44.630 ************************************ 00:09:44.630 20:26:38 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:44.889 [2024-07-12 20:26:38.791650] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:09:44.889 [2024-07-12 20:26:38.791867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81105 ] 00:09:44.889 [2024-07-12 20:26:38.947896] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:44.889 [2024-07-12 20:26:38.969157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.147 [2024-07-12 20:26:39.070011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.405 Running I/O for 1 seconds... 
00:09:46.778 00:09:46.779 Latency(us) 00:09:46.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.779 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.779 Nvme0n1p1 : 1.02 5936.01 23.19 0.00 0.00 21491.89 13524.25 32172.22 00:09:46.779 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.779 Nvme0n1p2 : 1.03 5925.91 23.15 0.00 0.00 21482.14 13643.40 32887.16 00:09:46.779 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.779 Nvme1n1 : 1.03 5916.70 23.11 0.00 0.00 21413.30 13941.29 29074.15 00:09:46.779 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.779 Nvme2n1 : 1.03 5908.59 23.08 0.00 0.00 21391.48 13524.25 29193.31 00:09:46.779 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.779 Nvme2n2 : 1.03 5901.49 23.05 0.00 0.00 21364.35 13464.67 29193.31 00:09:46.779 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.779 Nvme2n3 : 1.03 5894.42 23.03 0.00 0.00 21338.24 13047.62 28716.68 00:09:46.779 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.779 Nvme3n1 : 1.03 5887.21 23.00 0.00 0.00 21314.06 13285.93 28240.06 00:09:46.779 =================================================================================================================== 00:09:46.779 Total : 41370.34 161.60 0.00 0.00 21399.35 13047.62 32887.16 00:09:46.779 00:09:46.779 real 0m2.151s 00:09:46.779 user 0m1.752s 00:09:46.779 sys 0m0.278s 00:09:46.779 20:26:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.779 20:26:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:46.779 ************************************ 00:09:46.779 END TEST bdev_write_zeroes 00:09:46.779 ************************************ 00:09:46.779 20:26:40 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:46.779 20:26:40 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:46.779 20:26:40 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:46.779 20:26:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.779 20:26:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:46.779 ************************************ 00:09:46.779 START TEST bdev_json_nonenclosed 00:09:46.779 ************************************ 00:09:46.779 20:26:40 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:47.038 [2024-07-12 20:26:40.996275] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:09:47.038 [2024-07-12 20:26:40.996502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81153 ] 00:09:47.038 [2024-07-12 20:26:41.148377] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:47.038 [2024-07-12 20:26:41.168666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.302 [2024-07-12 20:26:41.271446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.302 [2024-07-12 20:26:41.271606] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:47.302 [2024-07-12 20:26:41.271644] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:47.302 [2024-07-12 20:26:41.271666] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:47.302 00:09:47.302 real 0m0.518s 00:09:47.302 user 0m0.259s 00:09:47.302 sys 0m0.153s 00:09:47.302 20:26:41 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:09:47.302 20:26:41 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.302 ************************************ 00:09:47.302 20:26:41 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:47.302 END TEST bdev_json_nonenclosed 00:09:47.302 ************************************ 00:09:47.559 20:26:41 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:09:47.559 20:26:41 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:09:47.559 20:26:41 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:47.559 20:26:41 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:47.559 20:26:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.559 20:26:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:47.559 ************************************ 00:09:47.559 START TEST bdev_json_nonarray 00:09:47.559 ************************************ 00:09:47.559 20:26:41 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:47.559 [2024-07-12 20:26:41.569200] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:09:47.559 [2024-07-12 20:26:41.569431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81178 ] 00:09:47.818 [2024-07-12 20:26:41.722713] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
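The two json_config checks in this part of the run feed bdevperf deliberately malformed configuration files and expect it to bail out: nonenclosed.json produced the "not enclosed in {}" error above, and nonarray.json, which this second bdevperf instance is about to load, trips the companion check that "subsystems" must be an array. The snippets below are illustrative stand-ins, not the repo's actual test files; they assume the usual SPDK config layout of a top-level object holding a "subsystems" array.

# Hypothetical examples of what each check rejects, written out for clarity.

# Well-formed skeleton: top-level JSON object with a "subsystems" array.
cat > /tmp/valid.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
EOF

# Rejected with "not enclosed in {}": the top-level value is not an object.
cat > /tmp/nonenclosed-example.json <<'EOF'
[ { "subsystem": "bdev", "config": [] } ]
EOF

# Rejected with "'subsystems' should be an array": member has the wrong type.
cat > /tmp/nonarray-example.json <<'EOF'
{ "subsystems": { "subsystem": "bdev" } }
EOF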
00:09:47.818 [2024-07-12 20:26:41.746842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.818 [2024-07-12 20:26:41.848052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.818 [2024-07-12 20:26:41.848197] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:09:47.818 [2024-07-12 20:26:41.848252] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:47.818 [2024-07-12 20:26:41.848278] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:48.076 00:09:48.076 real 0m0.516s 00:09:48.076 user 0m0.268s 00:09:48.076 sys 0m0.142s 00:09:48.076 20:26:41 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:09:48.076 20:26:41 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:48.076 20:26:41 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:48.076 ************************************ 00:09:48.076 END TEST bdev_json_nonarray 00:09:48.076 ************************************ 00:09:48.076 20:26:42 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:09:48.076 20:26:42 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:09:48.076 20:26:42 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:09:48.076 20:26:42 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:09:48.076 20:26:42 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:48.076 20:26:42 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:48.076 20:26:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.076 20:26:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:48.076 ************************************ 00:09:48.076 START TEST bdev_gpt_uuid 00:09:48.076 ************************************ 00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81208 00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 81208 00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 81208 ']' 00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
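The two JSON-config tests above (bdev_json_nonenclosed and bdev_json_nonarray) are negative tests: bdevperf is pointed at a deliberately malformed --json configuration, once with a body that is not enclosed in {} and once with a 'subsystems' value that is not an array, and the app is expected to exit non-zero (the es=234 recorded above), with the true on the blockdev.sh line keeping the suite going despite the expected failure. A minimal sketch of that pattern, assuming a hypothetical bad.json in place of the checked-in nonenclosed.json/nonarray.json fixtures:

    printf '"subsystems": []\n' > bad.json    # valid JSON fragment, but not enclosed in {}
    if ./build/examples/bdevperf --json bad.json -q 128 -o 4096 -w write_zeroes -t 1; then
        echo "unexpected success with malformed config" >&2
        exit 1
    fi
    echo "malformed config rejected as expected"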
00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.076 20:26:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:48.076 [2024-07-12 20:26:42.152840] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:09:48.076 [2024-07-12 20:26:42.153030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81208 ] 00:09:48.334 [2024-07-12 20:26:42.306124] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:48.334 [2024-07-12 20:26:42.330033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.334 [2024-07-12 20:26:42.431131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.271 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.271 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:09:49.271 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:49.271 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.271 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:49.530 Some configs were skipped because the RPC state that can call them passed over. 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:09:49.530 { 00:09:49.530 "name": "Nvme0n1p1", 00:09:49.530 "aliases": [ 00:09:49.530 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:49.530 ], 00:09:49.530 "product_name": "GPT Disk", 00:09:49.530 "block_size": 4096, 00:09:49.530 "num_blocks": 774144, 00:09:49.530 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:49.530 "md_size": 64, 00:09:49.530 "md_interleave": false, 00:09:49.530 "dif_type": 0, 00:09:49.530 "assigned_rate_limits": { 00:09:49.530 "rw_ios_per_sec": 0, 00:09:49.530 "rw_mbytes_per_sec": 0, 00:09:49.530 "r_mbytes_per_sec": 0, 00:09:49.530 "w_mbytes_per_sec": 0 00:09:49.530 }, 00:09:49.530 "claimed": false, 00:09:49.530 "zoned": false, 00:09:49.530 "supported_io_types": { 00:09:49.530 "read": true, 00:09:49.530 "write": true, 00:09:49.530 "unmap": true, 
00:09:49.530 "flush": true, 00:09:49.530 "reset": true, 00:09:49.530 "nvme_admin": false, 00:09:49.530 "nvme_io": false, 00:09:49.530 "nvme_io_md": false, 00:09:49.530 "write_zeroes": true, 00:09:49.530 "zcopy": false, 00:09:49.530 "get_zone_info": false, 00:09:49.530 "zone_management": false, 00:09:49.530 "zone_append": false, 00:09:49.530 "compare": true, 00:09:49.530 "compare_and_write": false, 00:09:49.530 "abort": true, 00:09:49.530 "seek_hole": false, 00:09:49.530 "seek_data": false, 00:09:49.530 "copy": true, 00:09:49.530 "nvme_iov_md": false 00:09:49.530 }, 00:09:49.530 "driver_specific": { 00:09:49.530 "gpt": { 00:09:49.530 "base_bdev": "Nvme0n1", 00:09:49.530 "offset_blocks": 256, 00:09:49.530 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:49.530 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:49.530 "partition_name": "SPDK_TEST_first" 00:09:49.530 } 00:09:49.530 } 00:09:49.530 } 00:09:49.530 ]' 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:09:49.530 { 00:09:49.530 "name": "Nvme0n1p2", 00:09:49.530 "aliases": [ 00:09:49.530 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:49.530 ], 00:09:49.530 "product_name": "GPT Disk", 00:09:49.530 "block_size": 4096, 00:09:49.530 "num_blocks": 774143, 00:09:49.530 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:49.530 "md_size": 64, 00:09:49.530 "md_interleave": false, 00:09:49.530 "dif_type": 0, 00:09:49.530 "assigned_rate_limits": { 00:09:49.530 "rw_ios_per_sec": 0, 00:09:49.530 "rw_mbytes_per_sec": 0, 00:09:49.530 "r_mbytes_per_sec": 0, 00:09:49.530 "w_mbytes_per_sec": 0 00:09:49.530 }, 00:09:49.530 "claimed": false, 00:09:49.530 "zoned": false, 00:09:49.530 "supported_io_types": { 00:09:49.530 "read": true, 00:09:49.530 "write": true, 00:09:49.530 "unmap": true, 00:09:49.530 "flush": true, 00:09:49.530 "reset": true, 00:09:49.530 "nvme_admin": false, 00:09:49.530 "nvme_io": false, 00:09:49.530 "nvme_io_md": false, 00:09:49.530 "write_zeroes": true, 00:09:49.530 "zcopy": false, 00:09:49.530 "get_zone_info": false, 00:09:49.530 "zone_management": false, 00:09:49.530 "zone_append": false, 00:09:49.530 "compare": true, 00:09:49.530 "compare_and_write": false, 00:09:49.530 
"abort": true, 00:09:49.530 "seek_hole": false, 00:09:49.530 "seek_data": false, 00:09:49.530 "copy": true, 00:09:49.530 "nvme_iov_md": false 00:09:49.530 }, 00:09:49.530 "driver_specific": { 00:09:49.530 "gpt": { 00:09:49.530 "base_bdev": "Nvme0n1", 00:09:49.530 "offset_blocks": 774400, 00:09:49.530 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:49.530 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:49.530 "partition_name": "SPDK_TEST_second" 00:09:49.530 } 00:09:49.530 } 00:09:49.530 } 00:09:49.530 ]' 00:09:49.530 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 81208 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 81208 ']' 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 81208 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81208 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:49.788 killing process with pid 81208 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81208' 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 81208 00:09:49.788 20:26:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 81208 00:09:50.354 00:09:50.354 real 0m2.273s 00:09:50.354 user 0m2.550s 00:09:50.354 sys 0m0.525s 00:09:50.354 20:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.354 20:26:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:50.354 ************************************ 00:09:50.354 END TEST bdev_gpt_uuid 00:09:50.354 ************************************ 00:09:50.354 20:26:44 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:50.354 20:26:44 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:09:50.354 20:26:44 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:50.354 20:26:44 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:09:50.354 20:26:44 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:50.354 20:26:44 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:50.354 20:26:44 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:50.354 20:26:44 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:50.354 20:26:44 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:50.354 20:26:44 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:50.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.869 Waiting for block devices as requested 00:09:50.869 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.869 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.127 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.127 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:56.422 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:56.422 20:26:50 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:09:56.422 20:26:50 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:09:56.422 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:56.422 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:56.422 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:56.422 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:09:56.422 20:26:50 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:56.422 00:09:56.422 real 0m54.088s 00:09:56.422 user 1m8.414s 00:09:56.422 sys 0m10.401s 00:09:56.422 20:26:50 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.422 20:26:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:56.422 ************************************ 00:09:56.422 END TEST blockdev_nvme_gpt 00:09:56.422 ************************************ 00:09:56.422 20:26:50 -- common/autotest_common.sh@1142 -- # return 0 00:09:56.422 20:26:50 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:56.422 20:26:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:56.422 20:26:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.422 20:26:50 -- common/autotest_common.sh@10 -- # set +x 00:09:56.422 ************************************ 00:09:56.422 START TEST nvme 00:09:56.422 ************************************ 00:09:56.422 20:26:50 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:56.680 * Looking for test storage... 
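The wipefs output above is the GPT cleanup step for the partition tests: the 8-byte strings erased at offset 0x1000 (LBA 1 for a 4096-byte sector size, the primary GPT header) and at 0x17a179000 (the backup header near the end of the device) are the ASCII signature "EFI PART", and the 2 bytes 55 aa at offset 0x1fe are the protective-MBR boot signature. A quick illustrative decode of the hex bytes, not part of the harness:

    printf '\x45\x46\x49\x20\x50\x41\x52\x54\n'    # prints: EFI PART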
00:09:56.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:56.680 20:26:50 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:56.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.551 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.810 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.810 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.810 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.810 20:26:51 nvme -- nvme/nvme.sh@79 -- # uname 00:09:57.810 20:26:51 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:57.810 20:26:51 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:57.810 20:26:51 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:57.810 20:26:51 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:57.810 20:26:51 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:09:57.810 20:26:51 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:09:57.810 20:26:51 nvme -- common/autotest_common.sh@1069 -- # stubpid=81825 00:09:57.810 20:26:51 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:57.810 Waiting for stub to ready for secondary processes... 00:09:57.810 20:26:51 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:09:57.810 20:26:51 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:57.810 20:26:51 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/81825 ]] 00:09:57.810 20:26:51 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:09:57.810 [2024-07-12 20:26:51.941099] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:09:57.810 [2024-07-12 20:26:51.941332] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:59.185 20:26:52 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:59.185 20:26:52 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/81825 ]] 00:09:59.185 20:26:52 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:09:59.185 [2024-07-12 20:26:53.180591] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:59.185 [2024-07-12 20:26:53.198499] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.185 [2024-07-12 20:26:53.268650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.185 [2024-07-12 20:26:53.268661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.185 [2024-07-12 20:26:53.268734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.185 [2024-07-12 20:26:53.283131] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:59.185 [2024-07-12 20:26:53.283218] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:59.185 [2024-07-12 20:26:53.291986] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:59.185 [2024-07-12 20:26:53.292712] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:59.185 [2024-07-12 20:26:53.293537] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:59.185 [2024-07-12 20:26:53.293764] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:59.185 [2024-07-12 20:26:53.293823] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:59.185 [2024-07-12 20:26:53.294566] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:59.185 [2024-07-12 20:26:53.294751] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:59.185 [2024-07-12 20:26:53.294856] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:59.185 [2024-07-12 20:26:53.295765] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:59.185 [2024-07-12 20:26:53.295932] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:59.185 [2024-07-12 20:26:53.295998] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:59.185 [2024-07-12 20:26:53.296071] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:59.185 [2024-07-12 20:26:53.296125] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:59.752 20:26:53 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:59.752 done. 00:09:59.752 20:26:53 nvme -- common/autotest_common.sh@1076 -- # echo done. 
00:09:59.752 20:26:53 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:59.752 20:26:53 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:59.752 20:26:53 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.752 20:26:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:00.011 ************************************ 00:10:00.011 START TEST nvme_reset 00:10:00.011 ************************************ 00:10:00.011 20:26:53 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:00.270 Initializing NVMe Controllers 00:10:00.270 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:00.271 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:00.271 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:00.271 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:00.271 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:00.271 00:10:00.271 real 0m0.266s 00:10:00.271 user 0m0.083s 00:10:00.271 sys 0m0.131s 00:10:00.271 20:26:54 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.271 20:26:54 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:00.271 ************************************ 00:10:00.271 END TEST nvme_reset 00:10:00.271 ************************************ 00:10:00.271 20:26:54 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:00.271 20:26:54 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:00.271 20:26:54 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:00.271 20:26:54 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.271 20:26:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:00.271 ************************************ 00:10:00.271 START TEST nvme_identify 00:10:00.271 ************************************ 00:10:00.271 20:26:54 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:10:00.271 20:26:54 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:00.271 20:26:54 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:00.271 20:26:54 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:00.271 20:26:54 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:00.271 20:26:54 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:00.271 20:26:54 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:10:00.271 20:26:54 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:00.271 20:26:54 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:00.271 20:26:54 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:00.271 20:26:54 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:00.271 20:26:54 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:00.271 20:26:54 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:00.532 [2024-07-12 20:26:54.554224] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 81857 terminated unexpected 00:10:00.532 ===================================================== 00:10:00.532 NVMe 
Controller at 0000:00:10.0 [1b36:0010] 00:10:00.532 ===================================================== 00:10:00.532 Controller Capabilities/Features 00:10:00.532 ================================ 00:10:00.532 Vendor ID: 1b36 00:10:00.532 Subsystem Vendor ID: 1af4 00:10:00.532 Serial Number: 12340 00:10:00.532 Model Number: QEMU NVMe Ctrl 00:10:00.532 Firmware Version: 8.0.0 00:10:00.532 Recommended Arb Burst: 6 00:10:00.532 IEEE OUI Identifier: 00 54 52 00:10:00.532 Multi-path I/O 00:10:00.532 May have multiple subsystem ports: No 00:10:00.532 May have multiple controllers: No 00:10:00.532 Associated with SR-IOV VF: No 00:10:00.532 Max Data Transfer Size: 524288 00:10:00.532 Max Number of Namespaces: 256 00:10:00.532 Max Number of I/O Queues: 64 00:10:00.532 NVMe Specification Version (VS): 1.4 00:10:00.532 NVMe Specification Version (Identify): 1.4 00:10:00.532 Maximum Queue Entries: 2048 00:10:00.532 Contiguous Queues Required: Yes 00:10:00.532 Arbitration Mechanisms Supported 00:10:00.532 Weighted Round Robin: Not Supported 00:10:00.532 Vendor Specific: Not Supported 00:10:00.532 Reset Timeout: 7500 ms 00:10:00.532 Doorbell Stride: 4 bytes 00:10:00.532 NVM Subsystem Reset: Not Supported 00:10:00.532 Command Sets Supported 00:10:00.532 NVM Command Set: Supported 00:10:00.532 Boot Partition: Not Supported 00:10:00.532 Memory Page Size Minimum: 4096 bytes 00:10:00.532 Memory Page Size Maximum: 65536 bytes 00:10:00.532 Persistent Memory Region: Not Supported 00:10:00.532 Optional Asynchronous Events Supported 00:10:00.532 Namespace Attribute Notices: Supported 00:10:00.532 Firmware Activation Notices: Not Supported 00:10:00.532 ANA Change Notices: Not Supported 00:10:00.532 PLE Aggregate Log Change Notices: Not Supported 00:10:00.532 LBA Status Info Alert Notices: Not Supported 00:10:00.532 EGE Aggregate Log Change Notices: Not Supported 00:10:00.532 Normal NVM Subsystem Shutdown event: Not Supported 00:10:00.532 Zone Descriptor Change Notices: Not Supported 00:10:00.532 Discovery Log Change Notices: Not Supported 00:10:00.532 Controller Attributes 00:10:00.532 128-bit Host Identifier: Not Supported 00:10:00.532 Non-Operational Permissive Mode: Not Supported 00:10:00.532 NVM Sets: Not Supported 00:10:00.532 Read Recovery Levels: Not Supported 00:10:00.532 Endurance Groups: Not Supported 00:10:00.532 Predictable Latency Mode: Not Supported 00:10:00.532 Traffic Based Keep ALive: Not Supported 00:10:00.532 Namespace Granularity: Not Supported 00:10:00.532 SQ Associations: Not Supported 00:10:00.532 UUID List: Not Supported 00:10:00.532 Multi-Domain Subsystem: Not Supported 00:10:00.532 Fixed Capacity Management: Not Supported 00:10:00.532 Variable Capacity Management: Not Supported 00:10:00.532 Delete Endurance Group: Not Supported 00:10:00.532 Delete NVM Set: Not Supported 00:10:00.532 Extended LBA Formats Supported: Supported 00:10:00.532 Flexible Data Placement Supported: Not Supported 00:10:00.532 00:10:00.532 Controller Memory Buffer Support 00:10:00.532 ================================ 00:10:00.533 Supported: No 00:10:00.533 00:10:00.533 Persistent Memory Region Support 00:10:00.533 ================================ 00:10:00.533 Supported: No 00:10:00.533 00:10:00.533 Admin Command Set Attributes 00:10:00.533 ============================ 00:10:00.533 Security Send/Receive: Not Supported 00:10:00.533 Format NVM: Supported 00:10:00.533 Firmware Activate/Download: Not Supported 00:10:00.533 Namespace Management: Supported 00:10:00.533 Device Self-Test: Not Supported 00:10:00.533 
Directives: Supported 00:10:00.533 NVMe-MI: Not Supported 00:10:00.533 Virtualization Management: Not Supported 00:10:00.533 Doorbell Buffer Config: Supported 00:10:00.533 Get LBA Status Capability: Not Supported 00:10:00.533 Command & Feature Lockdown Capability: Not Supported 00:10:00.533 Abort Command Limit: 4 00:10:00.533 Async Event Request Limit: 4 00:10:00.533 Number of Firmware Slots: N/A 00:10:00.533 Firmware Slot 1 Read-Only: N/A 00:10:00.533 Firmware Activation Without Reset: N/A 00:10:00.533 Multiple Update Detection Support: N/A 00:10:00.533 Firmware Update Granularity: No Information Provided 00:10:00.533 Per-Namespace SMART Log: Yes 00:10:00.533 Asymmetric Namespace Access Log Page: Not Supported 00:10:00.533 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:00.533 Command Effects Log Page: Supported 00:10:00.533 Get Log Page Extended Data: Supported 00:10:00.533 Telemetry Log Pages: Not Supported 00:10:00.533 Persistent Event Log Pages: Not Supported 00:10:00.533 Supported Log Pages Log Page: May Support 00:10:00.533 Commands Supported & Effects Log Page: Not Supported 00:10:00.533 Feature Identifiers & Effects Log Page:May Support 00:10:00.533 NVMe-MI Commands & Effects Log Page: May Support 00:10:00.533 Data Area 4 for Telemetry Log: Not Supported 00:10:00.533 Error Log Page Entries Supported: 1 00:10:00.533 Keep Alive: Not Supported 00:10:00.533 00:10:00.533 NVM Command Set Attributes 00:10:00.533 ========================== 00:10:00.533 Submission Queue Entry Size 00:10:00.533 Max: 64 00:10:00.533 Min: 64 00:10:00.533 Completion Queue Entry Size 00:10:00.533 Max: 16 00:10:00.533 Min: 16 00:10:00.533 Number of Namespaces: 256 00:10:00.533 Compare Command: Supported 00:10:00.533 Write Uncorrectable Command: Not Supported 00:10:00.533 Dataset Management Command: Supported 00:10:00.533 Write Zeroes Command: Supported 00:10:00.533 Set Features Save Field: Supported 00:10:00.533 Reservations: Not Supported 00:10:00.533 Timestamp: Supported 00:10:00.533 Copy: Supported 00:10:00.533 Volatile Write Cache: Present 00:10:00.533 Atomic Write Unit (Normal): 1 00:10:00.533 Atomic Write Unit (PFail): 1 00:10:00.533 Atomic Compare & Write Unit: 1 00:10:00.533 Fused Compare & Write: Not Supported 00:10:00.533 Scatter-Gather List 00:10:00.533 SGL Command Set: Supported 00:10:00.533 SGL Keyed: Not Supported 00:10:00.533 SGL Bit Bucket Descriptor: Not Supported 00:10:00.533 SGL Metadata Pointer: Not Supported 00:10:00.533 Oversized SGL: Not Supported 00:10:00.533 SGL Metadata Address: Not Supported 00:10:00.533 SGL Offset: Not Supported 00:10:00.533 Transport SGL Data Block: Not Supported 00:10:00.533 Replay Protected Memory Block: Not Supported 00:10:00.533 00:10:00.533 Firmware Slot Information 00:10:00.533 ========================= 00:10:00.533 Active slot: 1 00:10:00.533 Slot 1 Firmware Revision: 1.0 00:10:00.533 00:10:00.533 00:10:00.533 Commands Supported and Effects 00:10:00.533 ============================== 00:10:00.533 Admin Commands 00:10:00.533 -------------- 00:10:00.533 Delete I/O Submission Queue (00h): Supported 00:10:00.533 Create I/O Submission Queue (01h): Supported 00:10:00.533 Get Log Page (02h): Supported 00:10:00.533 Delete I/O Completion Queue (04h): Supported 00:10:00.533 Create I/O Completion Queue (05h): Supported 00:10:00.533 Identify (06h): Supported 00:10:00.533 Abort (08h): Supported 00:10:00.533 Set Features (09h): Supported 00:10:00.533 Get Features (0Ah): Supported 00:10:00.533 Asynchronous Event Request (0Ch): Supported 00:10:00.533 Namespace Attachment 
(15h): Supported NS-Inventory-Change 00:10:00.533 Directive Send (19h): Supported 00:10:00.533 Directive Receive (1Ah): Supported 00:10:00.533 Virtualization Management (1Ch): Supported 00:10:00.533 Doorbell Buffer Config (7Ch): Supported 00:10:00.533 Format NVM (80h): Supported LBA-Change 00:10:00.533 I/O Commands 00:10:00.533 ------------ 00:10:00.533 Flush (00h): Supported LBA-Change 00:10:00.533 Write (01h): Supported LBA-Change 00:10:00.533 Read (02h): Supported 00:10:00.533 Compare (05h): Supported 00:10:00.533 Write Zeroes (08h): Supported LBA-Change 00:10:00.533 Dataset Management (09h): Supported LBA-Change 00:10:00.533 Unknown (0Ch): Supported 00:10:00.533 Unknown (12h): Supported 00:10:00.533 Copy (19h): Supported LBA-Change 00:10:00.533 Unknown (1Dh): Supported LBA-Change 00:10:00.533 00:10:00.533 Error Log 00:10:00.533 ========= 00:10:00.533 00:10:00.533 Arbitration 00:10:00.533 =========== 00:10:00.533 Arbitration Burst: no limit 00:10:00.533 00:10:00.533 Power Management 00:10:00.533 ================ 00:10:00.533 Number of Power States: 1 00:10:00.533 Current Power State: Power State #0 00:10:00.533 Power State #0: 00:10:00.533 Max Power: 25.00 W 00:10:00.533 Non-Operational State: Operational 00:10:00.533 Entry Latency: 16 microseconds 00:10:00.533 Exit Latency: 4 microseconds 00:10:00.533 Relative Read Throughput: 0 00:10:00.533 Relative Read Latency: 0 00:10:00.533 Relative Write Throughput: 0 00:10:00.533 Relative Write Latency: 0 00:10:00.533 Idle Power[2024-07-12 20:26:54.555945] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 81857 terminated unexpected 00:10:00.533 : Not Reported 00:10:00.533 Active Power: Not Reported 00:10:00.533 Non-Operational Permissive Mode: Not Supported 00:10:00.533 00:10:00.533 Health Information 00:10:00.533 ================== 00:10:00.533 Critical Warnings: 00:10:00.533 Available Spare Space: OK 00:10:00.533 Temperature: OK 00:10:00.533 Device Reliability: OK 00:10:00.533 Read Only: No 00:10:00.533 Volatile Memory Backup: OK 00:10:00.533 Current Temperature: 323 Kelvin (50 Celsius) 00:10:00.533 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:00.533 Available Spare: 0% 00:10:00.533 Available Spare Threshold: 0% 00:10:00.533 Life Percentage Used: 0% 00:10:00.533 Data Units Read: 1026 00:10:00.533 Data Units Written: 854 00:10:00.533 Host Read Commands: 47416 00:10:00.533 Host Write Commands: 45861 00:10:00.533 Controller Busy Time: 0 minutes 00:10:00.533 Power Cycles: 0 00:10:00.533 Power On Hours: 0 hours 00:10:00.533 Unsafe Shutdowns: 0 00:10:00.533 Unrecoverable Media Errors: 0 00:10:00.533 Lifetime Error Log Entries: 0 00:10:00.533 Warning Temperature Time: 0 minutes 00:10:00.533 Critical Temperature Time: 0 minutes 00:10:00.533 00:10:00.533 Number of Queues 00:10:00.533 ================ 00:10:00.533 Number of I/O Submission Queues: 64 00:10:00.533 Number of I/O Completion Queues: 64 00:10:00.533 00:10:00.533 ZNS Specific Controller Data 00:10:00.533 ============================ 00:10:00.533 Zone Append Size Limit: 0 00:10:00.533 00:10:00.533 00:10:00.533 Active Namespaces 00:10:00.533 ================= 00:10:00.533 Namespace ID:1 00:10:00.533 Error Recovery Timeout: Unlimited 00:10:00.533 Command Set Identifier: NVM (00h) 00:10:00.533 Deallocate: Supported 00:10:00.533 Deallocated/Unwritten Error: Supported 00:10:00.533 Deallocated Read Value: All 0x00 00:10:00.533 Deallocate in Write Zeroes: Not Supported 00:10:00.533 Deallocated Guard Field: 0xFFFF 00:10:00.533 Flush: Supported 00:10:00.533 
Reservation: Not Supported 00:10:00.533 Metadata Transferred as: Separate Metadata Buffer 00:10:00.533 Namespace Sharing Capabilities: Private 00:10:00.533 Size (in LBAs): 1548666 (5GiB) 00:10:00.533 Capacity (in LBAs): 1548666 (5GiB) 00:10:00.533 Utilization (in LBAs): 1548666 (5GiB) 00:10:00.533 Thin Provisioning: Not Supported 00:10:00.533 Per-NS Atomic Units: No 00:10:00.533 Maximum Single Source Range Length: 128 00:10:00.533 Maximum Copy Length: 128 00:10:00.533 Maximum Source Range Count: 128 00:10:00.533 NGUID/EUI64 Never Reused: No 00:10:00.533 Namespace Write Protected: No 00:10:00.533 Number of LBA Formats: 8 00:10:00.533 Current LBA Format: LBA Format #07 00:10:00.533 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.533 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.533 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.533 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.533 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.533 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.533 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.533 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.533 00:10:00.533 NVM Specific Namespace Data 00:10:00.533 =========================== 00:10:00.533 Logical Block Storage Tag Mask: 0 00:10:00.533 Protection Information Capabilities: 00:10:00.533 16b Guard Protection Information Storage Tag Support: No 00:10:00.534 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.534 Storage Tag Check Read Support: No 00:10:00.534 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.534 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.534 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.534 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.534 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.534 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.534 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.534 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.534 ===================================================== 00:10:00.534 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:00.534 ===================================================== 00:10:00.534 Controller Capabilities/Features 00:10:00.534 ================================ 00:10:00.534 Vendor ID: 1b36 00:10:00.534 Subsystem Vendor ID: 1af4 00:10:00.534 Serial Number: 12341 00:10:00.534 Model Number: QEMU NVMe Ctrl 00:10:00.534 Firmware Version: 8.0.0 00:10:00.534 Recommended Arb Burst: 6 00:10:00.534 IEEE OUI Identifier: 00 54 52 00:10:00.534 Multi-path I/O 00:10:00.534 May have multiple subsystem ports: No 00:10:00.534 May have multiple controllers: No 00:10:00.534 Associated with SR-IOV VF: No 00:10:00.534 Max Data Transfer Size: 524288 00:10:00.534 Max Number of Namespaces: 256 00:10:00.534 Max Number of I/O Queues: 64 00:10:00.534 NVMe Specification Version (VS): 1.4 00:10:00.534 NVMe Specification Version (Identify): 1.4 00:10:00.534 Maximum Queue Entries: 2048 00:10:00.534 Contiguous Queues Required: Yes 00:10:00.534 Arbitration Mechanisms Supported 00:10:00.534 Weighted Round Robin: Not Supported 00:10:00.534 Vendor 
Specific: Not Supported 00:10:00.534 Reset Timeout: 7500 ms 00:10:00.534 Doorbell Stride: 4 bytes 00:10:00.534 NVM Subsystem Reset: Not Supported 00:10:00.534 Command Sets Supported 00:10:00.534 NVM Command Set: Supported 00:10:00.534 Boot Partition: Not Supported 00:10:00.534 Memory Page Size Minimum: 4096 bytes 00:10:00.534 Memory Page Size Maximum: 65536 bytes 00:10:00.534 Persistent Memory Region: Not Supported 00:10:00.534 Optional Asynchronous Events Supported 00:10:00.534 Namespace Attribute Notices: Supported 00:10:00.534 Firmware Activation Notices: Not Supported 00:10:00.534 ANA Change Notices: Not Supported 00:10:00.534 PLE Aggregate Log Change Notices: Not Supported 00:10:00.534 LBA Status Info Alert Notices: Not Supported 00:10:00.534 EGE Aggregate Log Change Notices: Not Supported 00:10:00.534 Normal NVM Subsystem Shutdown event: Not Supported 00:10:00.534 Zone Descriptor Change Notices: Not Supported 00:10:00.534 Discovery Log Change Notices: Not Supported 00:10:00.534 Controller Attributes 00:10:00.534 128-bit Host Identifier: Not Supported 00:10:00.534 Non-Operational Permissive Mode: Not Supported 00:10:00.534 NVM Sets: Not Supported 00:10:00.534 Read Recovery Levels: Not Supported 00:10:00.534 Endurance Groups: Not Supported 00:10:00.534 Predictable Latency Mode: Not Supported 00:10:00.534 Traffic Based Keep ALive: Not Supported 00:10:00.534 Namespace Granularity: Not Supported 00:10:00.534 SQ Associations: Not Supported 00:10:00.534 UUID List: Not Supported 00:10:00.534 Multi-Domain Subsystem: Not Supported 00:10:00.534 Fixed Capacity Management: Not Supported 00:10:00.534 Variable Capacity Management: Not Supported 00:10:00.534 Delete Endurance Group: Not Supported 00:10:00.534 Delete NVM Set: Not Supported 00:10:00.534 Extended LBA Formats Supported: Supported 00:10:00.534 Flexible Data Placement Supported: Not Supported 00:10:00.534 00:10:00.534 Controller Memory Buffer Support 00:10:00.534 ================================ 00:10:00.534 Supported: No 00:10:00.534 00:10:00.534 Persistent Memory Region Support 00:10:00.534 ================================ 00:10:00.534 Supported: No 00:10:00.534 00:10:00.534 Admin Command Set Attributes 00:10:00.534 ============================ 00:10:00.534 Security Send/Receive: Not Supported 00:10:00.534 Format NVM: Supported 00:10:00.534 Firmware Activate/Download: Not Supported 00:10:00.534 Namespace Management: Supported 00:10:00.534 Device Self-Test: Not Supported 00:10:00.534 Directives: Supported 00:10:00.534 NVMe-MI: Not Supported 00:10:00.534 Virtualization Management: Not Supported 00:10:00.534 Doorbell Buffer Config: Supported 00:10:00.534 Get LBA Status Capability: Not Supported 00:10:00.534 Command & Feature Lockdown Capability: Not Supported 00:10:00.534 Abort Command Limit: 4 00:10:00.534 Async Event Request Limit: 4 00:10:00.534 Number of Firmware Slots: N/A 00:10:00.534 Firmware Slot 1 Read-Only: N/A 00:10:00.534 Firmware Activation Without Reset: N/A 00:10:00.534 Multiple Update Detection Support: N/A 00:10:00.534 Firmware Update Granularity: No Information Provided 00:10:00.534 Per-Namespace SMART Log: Yes 00:10:00.534 Asymmetric Namespace Access Log Page: Not Supported 00:10:00.534 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:00.534 Command Effects Log Page: Supported 00:10:00.534 Get Log Page Extended Data: Supported 00:10:00.534 Telemetry Log Pages: Not Supported 00:10:00.534 Persistent Event Log Pages: Not Supported 00:10:00.534 Supported Log Pages Log Page: May Support 00:10:00.534 Commands Supported & Effects 
Log Page: Not Supported 00:10:00.534 Feature Identifiers & Effects Log Page:May Support 00:10:00.534 NVMe-MI Commands & Effects Log Page: May Support 00:10:00.534 Data Area 4 for Telemetry Log: Not Supported 00:10:00.534 Error Log Page Entries Supported: 1 00:10:00.534 Keep Alive: Not Supported 00:10:00.534 00:10:00.534 NVM Command Set Attributes 00:10:00.534 ========================== 00:10:00.534 Submission Queue Entry Size 00:10:00.534 Max: 64 00:10:00.534 Min: 64 00:10:00.534 Completion Queue Entry Size 00:10:00.534 Max: 16 00:10:00.534 Min: 16 00:10:00.534 Number of Namespaces: 256 00:10:00.534 Compare Command: Supported 00:10:00.534 Write Uncorrectable Command: Not Supported 00:10:00.534 Dataset Management Command: Supported 00:10:00.534 Write Zeroes Command: Supported 00:10:00.534 Set Features Save Field: Supported 00:10:00.534 Reservations: Not Supported 00:10:00.534 Timestamp: Supported 00:10:00.534 Copy: Supported 00:10:00.534 Volatile Write Cache: Present 00:10:00.534 Atomic Write Unit (Normal): 1 00:10:00.534 Atomic Write Unit (PFail): 1 00:10:00.534 Atomic Compare & Write Unit: 1 00:10:00.534 Fused Compare & Write: Not Supported 00:10:00.534 Scatter-Gather List 00:10:00.534 SGL Command Set: Supported 00:10:00.534 SGL Keyed: Not Supported 00:10:00.534 SGL Bit Bucket Descriptor: Not Supported 00:10:00.534 SGL Metadata Pointer: Not Supported 00:10:00.534 Oversized SGL: Not Supported 00:10:00.534 SGL Metadata Address: Not Supported 00:10:00.534 SGL Offset: Not Supported 00:10:00.534 Transport SGL Data Block: Not Supported 00:10:00.534 Replay Protected Memory Block: Not Supported 00:10:00.534 00:10:00.534 Firmware Slot Information 00:10:00.534 ========================= 00:10:00.534 Active slot: 1 00:10:00.534 Slot 1 Firmware Revision: 1.0 00:10:00.534 00:10:00.534 00:10:00.534 Commands Supported and Effects 00:10:00.534 ============================== 00:10:00.534 Admin Commands 00:10:00.534 -------------- 00:10:00.534 Delete I/O Submission Queue (00h): Supported 00:10:00.534 Create I/O Submission Queue (01h): Supported 00:10:00.534 Get Log Page (02h): Supported 00:10:00.534 Delete I/O Completion Queue (04h): Supported 00:10:00.534 Create I/O Completion Queue (05h): Supported 00:10:00.534 Identify (06h): Supported 00:10:00.534 Abort (08h): Supported 00:10:00.534 Set Features (09h): Supported 00:10:00.534 Get Features (0Ah): Supported 00:10:00.534 Asynchronous Event Request (0Ch): Supported 00:10:00.534 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:00.534 Directive Send (19h): Supported 00:10:00.534 Directive Receive (1Ah): Supported 00:10:00.534 Virtualization Management (1Ch): Supported 00:10:00.534 Doorbell Buffer Config (7Ch): Supported 00:10:00.534 Format NVM (80h): Supported LBA-Change 00:10:00.534 I/O Commands 00:10:00.534 ------------ 00:10:00.534 Flush (00h): Supported LBA-Change 00:10:00.534 Write (01h): Supported LBA-Change 00:10:00.534 Read (02h): Supported 00:10:00.534 Compare (05h): Supported 00:10:00.534 Write Zeroes (08h): Supported LBA-Change 00:10:00.534 Dataset Management (09h): Supported LBA-Change 00:10:00.534 Unknown (0Ch): Supported 00:10:00.534 Unknown (12h): Supported 00:10:00.534 Copy (19h): Supported LBA-Change 00:10:00.534 Unknown (1Dh): Supported LBA-Change 00:10:00.534 00:10:00.534 Error Log 00:10:00.534 ========= 00:10:00.534 00:10:00.534 Arbitration 00:10:00.534 =========== 00:10:00.534 Arbitration Burst: no limit 00:10:00.534 00:10:00.534 Power Management 00:10:00.534 ================ 00:10:00.534 Number of Power States: 1 
00:10:00.534 Current Power State: Power State #0 00:10:00.534 Power State #0: 00:10:00.534 Max Power: 25.00 W 00:10:00.534 Non-Operational State: Operational 00:10:00.534 Entry Latency: 16 microseconds 00:10:00.534 Exit Latency: 4 microseconds 00:10:00.534 Relative Read Throughput: 0 00:10:00.534 Relative Read Latency: 0 00:10:00.534 Relative Write Throughput: 0 00:10:00.535 Relative Write Latency: 0 00:10:00.535 Idle Power: Not Reported 00:10:00.535 Active Power: Not Reported 00:10:00.535 Non-Operational Permissive Mode: Not Supported 00:10:00.535 00:10:00.535 Health Information 00:10:00.535 ================== 00:10:00.535 Critical Warnings: 00:10:00.535 Available Spare Space: OK 00:10:00.535 Temperature: [2024-07-12 20:26:54.557047] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 81857 terminated unexpected 00:10:00.535 OK 00:10:00.535 Device Reliability: OK 00:10:00.535 Read Only: No 00:10:00.535 Volatile Memory Backup: OK 00:10:00.535 Current Temperature: 323 Kelvin (50 Celsius) 00:10:00.535 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:00.535 Available Spare: 0% 00:10:00.535 Available Spare Threshold: 0% 00:10:00.535 Life Percentage Used: 0% 00:10:00.535 Data Units Read: 725 00:10:00.535 Data Units Written: 576 00:10:00.535 Host Read Commands: 33297 00:10:00.535 Host Write Commands: 31071 00:10:00.535 Controller Busy Time: 0 minutes 00:10:00.535 Power Cycles: 0 00:10:00.535 Power On Hours: 0 hours 00:10:00.535 Unsafe Shutdowns: 0 00:10:00.535 Unrecoverable Media Errors: 0 00:10:00.535 Lifetime Error Log Entries: 0 00:10:00.535 Warning Temperature Time: 0 minutes 00:10:00.535 Critical Temperature Time: 0 minutes 00:10:00.535 00:10:00.535 Number of Queues 00:10:00.535 ================ 00:10:00.535 Number of I/O Submission Queues: 64 00:10:00.535 Number of I/O Completion Queues: 64 00:10:00.535 00:10:00.535 ZNS Specific Controller Data 00:10:00.535 ============================ 00:10:00.535 Zone Append Size Limit: 0 00:10:00.535 00:10:00.535 00:10:00.535 Active Namespaces 00:10:00.535 ================= 00:10:00.535 Namespace ID:1 00:10:00.535 Error Recovery Timeout: Unlimited 00:10:00.535 Command Set Identifier: NVM (00h) 00:10:00.535 Deallocate: Supported 00:10:00.535 Deallocated/Unwritten Error: Supported 00:10:00.535 Deallocated Read Value: All 0x00 00:10:00.535 Deallocate in Write Zeroes: Not Supported 00:10:00.535 Deallocated Guard Field: 0xFFFF 00:10:00.535 Flush: Supported 00:10:00.535 Reservation: Not Supported 00:10:00.535 Namespace Sharing Capabilities: Private 00:10:00.535 Size (in LBAs): 1310720 (5GiB) 00:10:00.535 Capacity (in LBAs): 1310720 (5GiB) 00:10:00.535 Utilization (in LBAs): 1310720 (5GiB) 00:10:00.535 Thin Provisioning: Not Supported 00:10:00.535 Per-NS Atomic Units: No 00:10:00.535 Maximum Single Source Range Length: 128 00:10:00.535 Maximum Copy Length: 128 00:10:00.535 Maximum Source Range Count: 128 00:10:00.535 NGUID/EUI64 Never Reused: No 00:10:00.535 Namespace Write Protected: No 00:10:00.535 Number of LBA Formats: 8 00:10:00.535 Current LBA Format: LBA Format #04 00:10:00.535 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.535 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.535 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.535 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.535 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.535 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.535 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.535 LBA Format #07: 
Data Size: 4096 Metadata Size: 64 00:10:00.535 00:10:00.535 NVM Specific Namespace Data 00:10:00.535 =========================== 00:10:00.535 Logical Block Storage Tag Mask: 0 00:10:00.535 Protection Information Capabilities: 00:10:00.535 16b Guard Protection Information Storage Tag Support: No 00:10:00.535 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.535 Storage Tag Check Read Support: No 00:10:00.535 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.535 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.535 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.535 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.535 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.535 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.535 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.535 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.535 ===================================================== 00:10:00.535 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:00.535 ===================================================== 00:10:00.535 Controller Capabilities/Features 00:10:00.535 ================================ 00:10:00.535 Vendor ID: 1b36 00:10:00.535 Subsystem Vendor ID: 1af4 00:10:00.535 Serial Number: 12343 00:10:00.535 Model Number: QEMU NVMe Ctrl 00:10:00.535 Firmware Version: 8.0.0 00:10:00.535 Recommended Arb Burst: 6 00:10:00.535 IEEE OUI Identifier: 00 54 52 00:10:00.535 Multi-path I/O 00:10:00.535 May have multiple subsystem ports: No 00:10:00.535 May have multiple controllers: Yes 00:10:00.535 Associated with SR-IOV VF: No 00:10:00.535 Max Data Transfer Size: 524288 00:10:00.535 Max Number of Namespaces: 256 00:10:00.535 Max Number of I/O Queues: 64 00:10:00.535 NVMe Specification Version (VS): 1.4 00:10:00.535 NVMe Specification Version (Identify): 1.4 00:10:00.535 Maximum Queue Entries: 2048 00:10:00.535 Contiguous Queues Required: Yes 00:10:00.535 Arbitration Mechanisms Supported 00:10:00.535 Weighted Round Robin: Not Supported 00:10:00.535 Vendor Specific: Not Supported 00:10:00.535 Reset Timeout: 7500 ms 00:10:00.535 Doorbell Stride: 4 bytes 00:10:00.535 NVM Subsystem Reset: Not Supported 00:10:00.535 Command Sets Supported 00:10:00.535 NVM Command Set: Supported 00:10:00.535 Boot Partition: Not Supported 00:10:00.535 Memory Page Size Minimum: 4096 bytes 00:10:00.535 Memory Page Size Maximum: 65536 bytes 00:10:00.535 Persistent Memory Region: Not Supported 00:10:00.535 Optional Asynchronous Events Supported 00:10:00.535 Namespace Attribute Notices: Supported 00:10:00.535 Firmware Activation Notices: Not Supported 00:10:00.535 ANA Change Notices: Not Supported 00:10:00.535 PLE Aggregate Log Change Notices: Not Supported 00:10:00.535 LBA Status Info Alert Notices: Not Supported 00:10:00.535 EGE Aggregate Log Change Notices: Not Supported 00:10:00.535 Normal NVM Subsystem Shutdown event: Not Supported 00:10:00.535 Zone Descriptor Change Notices: Not Supported 00:10:00.535 Discovery Log Change Notices: Not Supported 00:10:00.535 Controller Attributes 00:10:00.535 128-bit Host Identifier: Not Supported 00:10:00.535 Non-Operational Permissive Mode: Not Supported 
00:10:00.535 NVM Sets: Not Supported 00:10:00.535 Read Recovery Levels: Not Supported 00:10:00.535 Endurance Groups: Supported 00:10:00.535 Predictable Latency Mode: Not Supported 00:10:00.535 Traffic Based Keep ALive: Not Supported 00:10:00.535 Namespace Granularity: Not Supported 00:10:00.535 SQ Associations: Not Supported 00:10:00.535 UUID List: Not Supported 00:10:00.535 Multi-Domain Subsystem: Not Supported 00:10:00.535 Fixed Capacity Management: Not Supported 00:10:00.535 Variable Capacity Management: Not Supported 00:10:00.535 Delete Endurance Group: Not Supported 00:10:00.535 Delete NVM Set: Not Supported 00:10:00.535 Extended LBA Formats Supported: Supported 00:10:00.535 Flexible Data Placement Supported: Supported 00:10:00.535 00:10:00.535 Controller Memory Buffer Support 00:10:00.535 ================================ 00:10:00.535 Supported: No 00:10:00.535 00:10:00.535 Persistent Memory Region Support 00:10:00.535 ================================ 00:10:00.535 Supported: No 00:10:00.535 00:10:00.535 Admin Command Set Attributes 00:10:00.535 ============================ 00:10:00.535 Security Send/Receive: Not Supported 00:10:00.535 Format NVM: Supported 00:10:00.535 Firmware Activate/Download: Not Supported 00:10:00.535 Namespace Management: Supported 00:10:00.535 Device Self-Test: Not Supported 00:10:00.535 Directives: Supported 00:10:00.535 NVMe-MI: Not Supported 00:10:00.535 Virtualization Management: Not Supported 00:10:00.535 Doorbell Buffer Config: Supported 00:10:00.535 Get LBA Status Capability: Not Supported 00:10:00.535 Command & Feature Lockdown Capability: Not Supported 00:10:00.535 Abort Command Limit: 4 00:10:00.535 Async Event Request Limit: 4 00:10:00.535 Number of Firmware Slots: N/A 00:10:00.535 Firmware Slot 1 Read-Only: N/A 00:10:00.535 Firmware Activation Without Reset: N/A 00:10:00.535 Multiple Update Detection Support: N/A 00:10:00.535 Firmware Update Granularity: No Information Provided 00:10:00.535 Per-Namespace SMART Log: Yes 00:10:00.535 Asymmetric Namespace Access Log Page: Not Supported 00:10:00.535 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:00.535 Command Effects Log Page: Supported 00:10:00.535 Get Log Page Extended Data: Supported 00:10:00.535 Telemetry Log Pages: Not Supported 00:10:00.535 Persistent Event Log Pages: Not Supported 00:10:00.535 Supported Log Pages Log Page: May Support 00:10:00.535 Commands Supported & Effects Log Page: Not Supported 00:10:00.535 Feature Identifiers & Effects Log Page:May Support 00:10:00.535 NVMe-MI Commands & Effects Log Page: May Support 00:10:00.535 Data Area 4 for Telemetry Log: Not Supported 00:10:00.535 Error Log Page Entries Supported: 1 00:10:00.535 Keep Alive: Not Supported 00:10:00.535 00:10:00.535 NVM Command Set Attributes 00:10:00.535 ========================== 00:10:00.535 Submission Queue Entry Size 00:10:00.536 Max: 64 00:10:00.536 Min: 64 00:10:00.536 Completion Queue Entry Size 00:10:00.536 Max: 16 00:10:00.536 Min: 16 00:10:00.536 Number of Namespaces: 256 00:10:00.536 Compare Command: Supported 00:10:00.536 Write Uncorrectable Command: Not Supported 00:10:00.536 Dataset Management Command: Supported 00:10:00.536 Write Zeroes Command: Supported 00:10:00.536 Set Features Save Field: Supported 00:10:00.536 Reservations: Not Supported 00:10:00.536 Timestamp: Supported 00:10:00.536 Copy: Supported 00:10:00.536 Volatile Write Cache: Present 00:10:00.536 Atomic Write Unit (Normal): 1 00:10:00.536 Atomic Write Unit (PFail): 1 00:10:00.536 Atomic Compare & Write Unit: 1 00:10:00.536 Fused 
Compare & Write: Not Supported 00:10:00.536 Scatter-Gather List 00:10:00.536 SGL Command Set: Supported 00:10:00.536 SGL Keyed: Not Supported 00:10:00.536 SGL Bit Bucket Descriptor: Not Supported 00:10:00.536 SGL Metadata Pointer: Not Supported 00:10:00.536 Oversized SGL: Not Supported 00:10:00.536 SGL Metadata Address: Not Supported 00:10:00.536 SGL Offset: Not Supported 00:10:00.536 Transport SGL Data Block: Not Supported 00:10:00.536 Replay Protected Memory Block: Not Supported 00:10:00.536 00:10:00.536 Firmware Slot Information 00:10:00.536 ========================= 00:10:00.536 Active slot: 1 00:10:00.536 Slot 1 Firmware Revision: 1.0 00:10:00.536 00:10:00.536 00:10:00.536 Commands Supported and Effects 00:10:00.536 ============================== 00:10:00.536 Admin Commands 00:10:00.536 -------------- 00:10:00.536 Delete I/O Submission Queue (00h): Supported 00:10:00.536 Create I/O Submission Queue (01h): Supported 00:10:00.536 Get Log Page (02h): Supported 00:10:00.536 Delete I/O Completion Queue (04h): Supported 00:10:00.536 Create I/O Completion Queue (05h): Supported 00:10:00.536 Identify (06h): Supported 00:10:00.536 Abort (08h): Supported 00:10:00.536 Set Features (09h): Supported 00:10:00.536 Get Features (0Ah): Supported 00:10:00.536 Asynchronous Event Request (0Ch): Supported 00:10:00.536 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:00.536 Directive Send (19h): Supported 00:10:00.536 Directive Receive (1Ah): Supported 00:10:00.536 Virtualization Management (1Ch): Supported 00:10:00.536 Doorbell Buffer Config (7Ch): Supported 00:10:00.536 Format NVM (80h): Supported LBA-Change 00:10:00.536 I/O Commands 00:10:00.536 ------------ 00:10:00.536 Flush (00h): Supported LBA-Change 00:10:00.536 Write (01h): Supported LBA-Change 00:10:00.536 Read (02h): Supported 00:10:00.536 Compare (05h): Supported 00:10:00.536 Write Zeroes (08h): Supported LBA-Change 00:10:00.536 Dataset Management (09h): Supported LBA-Change 00:10:00.536 Unknown (0Ch): Supported 00:10:00.536 Unknown (12h): Supported 00:10:00.536 Copy (19h): Supported LBA-Change 00:10:00.536 Unknown (1Dh): Supported LBA-Change 00:10:00.536 00:10:00.536 Error Log 00:10:00.536 ========= 00:10:00.536 00:10:00.536 Arbitration 00:10:00.536 =========== 00:10:00.536 Arbitration Burst: no limit 00:10:00.536 00:10:00.536 Power Management 00:10:00.536 ================ 00:10:00.536 Number of Power States: 1 00:10:00.536 Current Power State: Power State #0 00:10:00.536 Power State #0: 00:10:00.536 Max Power: 25.00 W 00:10:00.536 Non-Operational State: Operational 00:10:00.536 Entry Latency: 16 microseconds 00:10:00.536 Exit Latency: 4 microseconds 00:10:00.536 Relative Read Throughput: 0 00:10:00.536 Relative Read Latency: 0 00:10:00.536 Relative Write Throughput: 0 00:10:00.536 Relative Write Latency: 0 00:10:00.536 Idle Power: Not Reported 00:10:00.536 Active Power: Not Reported 00:10:00.536 Non-Operational Permissive Mode: Not Supported 00:10:00.536 00:10:00.536 Health Information 00:10:00.536 ================== 00:10:00.536 Critical Warnings: 00:10:00.536 Available Spare Space: OK 00:10:00.536 Temperature: OK 00:10:00.536 Device Reliability: OK 00:10:00.536 Read Only: No 00:10:00.536 Volatile Memory Backup: OK 00:10:00.536 Current Temperature: 323 Kelvin (50 Celsius) 00:10:00.536 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:00.536 Available Spare: 0% 00:10:00.536 Available Spare Threshold: 0% 00:10:00.536 Life Percentage Used: 0% 00:10:00.536 Data Units Read: 781 00:10:00.536 Data Units Written: 675 00:10:00.536 
Host Read Commands: 33197 00:10:00.536 Host Write Commands: 31787 00:10:00.536 Controller Busy Time: 0 minutes 00:10:00.536 Power Cycles: 0 00:10:00.536 Power On Hours: 0 hours 00:10:00.536 Unsafe Shutdowns: 0 00:10:00.536 Unrecoverable Media Errors: 0 00:10:00.536 Lifetime Error Log Entries: 0 00:10:00.536 Warning Temperature Time: 0 minutes 00:10:00.536 Critical Temperature Time: 0 minutes 00:10:00.536 00:10:00.536 Number of Queues 00:10:00.536 ================ 00:10:00.536 Number of I/O Submission Queues: 64 00:10:00.536 Number of I/O Completion Queues: 64 00:10:00.536 00:10:00.536 ZNS Specific Controller Data 00:10:00.536 ============================ 00:10:00.536 Zone Append Size Limit: 0 00:10:00.536 00:10:00.536 00:10:00.536 Active Namespaces 00:10:00.536 ================= 00:10:00.536 Namespace ID:1 00:10:00.536 Error Recovery Timeout: Unlimited 00:10:00.536 Command Set Identifier: NVM (00h) 00:10:00.536 Deallocate: Supported 00:10:00.536 Deallocated/Unwritten Error: Supported 00:10:00.536 Deallocated Read Value: All 0x00 00:10:00.536 Deallocate in Write Zeroes: Not Supported 00:10:00.536 Deallocated Guard Field: 0xFFFF 00:10:00.536 Flush: Supported 00:10:00.536 Reservation: Not Supported 00:10:00.536 Namespace Sharing Capabilities: Multiple Controllers 00:10:00.536 Size (in LBAs): 262144 (1GiB) 00:10:00.536 Capacity (in LBAs): 262144 (1GiB) 00:10:00.536 Utilization (in LBAs): 262144 (1GiB) 00:10:00.536 Thin Provisioning: Not Supported 00:10:00.536 Per-NS Atomic Units: No 00:10:00.536 Maximum Single Source Range Length: 128 00:10:00.536 Maximum Copy Length: 128 00:10:00.536 Maximum Source Range Count: 128 00:10:00.536 NGUID/EUI64 Never Reused: No 00:10:00.536 Namespace Write Protected: No 00:10:00.536 Endurance group ID: 1 00:10:00.536 Number of LBA Formats: 8 00:10:00.536 Current LBA Format: LBA Format #04 00:10:00.536 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.536 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.536 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.536 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.536 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.536 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.536 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.536 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.536 00:10:00.536 Get Feature FDP: 00:10:00.536 ================ 00:10:00.536 Enabled: Yes 00:10:00.536 FDP configuration index: 0 00:10:00.536 00:10:00.536 FDP configurations log page 00:10:00.536 =========================== 00:10:00.536 Number of FDP configurations: 1 00:10:00.536 Version: 0 00:10:00.536 Size: 112 00:10:00.536 FDP Configuration Descriptor: 0 00:10:00.536 Descriptor Size: 96 00:10:00.536 Reclaim Group Identifier format: 2 00:10:00.536 FDP Volatile Write Cache: Not Present 00:10:00.536 FDP Configuration: Valid 00:10:00.536 Vendor Specific Size: 0 00:10:00.536 Number of Reclaim Groups: 2 00:10:00.536 Number of Reclaim Unit Handles: 8 00:10:00.536 Max Placement Identifiers: 128 00:10:00.536 Number of Namespaces Supported: 256 00:10:00.536 Reclaim Unit Nominal Size: 6000000 bytes 00:10:00.536 Estimated Reclaim Unit Time Limit: Not Reported 00:10:00.536 RUH Desc #000: RUH Type: Initially Isolated 00:10:00.536 RUH Desc #001: RUH Type: Initially Isolated 00:10:00.536 RUH Desc #002: RUH Type: Initially Isolated 00:10:00.536 RUH Desc #003: RUH Type: Initially Isolated 00:10:00.536 RUH Desc #004: RUH Type: Initially Isolated 00:10:00.536 RUH Desc #005: RUH Type: Initially
Isolated 00:10:00.536 RUH Desc #006: RUH Type: Initially Isolated 00:10:00.536 RUH Desc #007: RUH Type: Initially Isolated 00:10:00.536 00:10:00.536 FDP reclaim unit handle usage log page 00:10:00.536 ====================================== 00:10:00.536 Number of Reclaim Unit Handles: 8 00:10:00.536 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:00.536 RUH Usage Desc #001: RUH Attributes: Unused 00:10:00.536 RUH Usage Desc #002: RUH Attributes: Unused 00:10:00.536 RUH Usage Desc #003: RUH Attributes: Unused 00:10:00.536 RUH Usage Desc #004: RUH Attributes: Unused 00:10:00.536 RUH Usage Desc #005: RUH Attributes: Unused 00:10:00.536 RUH Usage Desc #006: RUH Attributes: Unused 00:10:00.536 RUH Usage Desc #007: RUH Attributes: Unused 00:10:00.536 00:10:00.536 FDP statistics log page 00:10:00.536 ======================= 00:10:00.536 Host bytes with metadata written: 416849920 00:10:00.536 [2024-07-12 20:26:54.558869] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 81857 terminated unexpected 00:10:00.536 Media bytes with metadata written: 416894976 00:10:00.536 Media bytes erased: 0 00:10:00.536 00:10:00.536 FDP events log page 00:10:00.536 =================== 00:10:00.536 Number of FDP events: 0 00:10:00.536 00:10:00.536 NVM Specific Namespace Data 00:10:00.537 =========================== 00:10:00.537 Logical Block Storage Tag Mask: 0 00:10:00.537 Protection Information Capabilities: 00:10:00.537 16b Guard Protection Information Storage Tag Support: No 00:10:00.537 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.537 Storage Tag Check Read Support: No 00:10:00.537 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.537 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.537 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.537 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.537 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.537 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.537 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.537 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.537 ===================================================== 00:10:00.537 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:00.537 ===================================================== 00:10:00.537 Controller Capabilities/Features 00:10:00.537 ================================ 00:10:00.537 Vendor ID: 1b36 00:10:00.537 Subsystem Vendor ID: 1af4 00:10:00.537 Serial Number: 12342 00:10:00.537 Model Number: QEMU NVMe Ctrl 00:10:00.537 Firmware Version: 8.0.0 00:10:00.537 Recommended Arb Burst: 6 00:10:00.537 IEEE OUI Identifier: 00 54 52 00:10:00.537 Multi-path I/O 00:10:00.537 May have multiple subsystem ports: No 00:10:00.537 May have multiple controllers: No 00:10:00.537 Associated with SR-IOV VF: No 00:10:00.537 Max Data Transfer Size: 524288 00:10:00.537 Max Number of Namespaces: 256 00:10:00.537 Max Number of I/O Queues: 64 00:10:00.537 NVMe Specification Version (VS): 1.4 00:10:00.537 NVMe Specification Version (Identify): 1.4 00:10:00.537 Maximum Queue Entries: 2048 00:10:00.537 Contiguous Queues Required: Yes 00:10:00.537 Arbitration
Mechanisms Supported 00:10:00.537 Weighted Round Robin: Not Supported 00:10:00.537 Vendor Specific: Not Supported 00:10:00.537 Reset Timeout: 7500 ms 00:10:00.537 Doorbell Stride: 4 bytes 00:10:00.537 NVM Subsystem Reset: Not Supported 00:10:00.537 Command Sets Supported 00:10:00.537 NVM Command Set: Supported 00:10:00.537 Boot Partition: Not Supported 00:10:00.537 Memory Page Size Minimum: 4096 bytes 00:10:00.537 Memory Page Size Maximum: 65536 bytes 00:10:00.537 Persistent Memory Region: Not Supported 00:10:00.537 Optional Asynchronous Events Supported 00:10:00.537 Namespace Attribute Notices: Supported 00:10:00.537 Firmware Activation Notices: Not Supported 00:10:00.537 ANA Change Notices: Not Supported 00:10:00.537 PLE Aggregate Log Change Notices: Not Supported 00:10:00.537 LBA Status Info Alert Notices: Not Supported 00:10:00.537 EGE Aggregate Log Change Notices: Not Supported 00:10:00.537 Normal NVM Subsystem Shutdown event: Not Supported 00:10:00.537 Zone Descriptor Change Notices: Not Supported 00:10:00.537 Discovery Log Change Notices: Not Supported 00:10:00.537 Controller Attributes 00:10:00.537 128-bit Host Identifier: Not Supported 00:10:00.537 Non-Operational Permissive Mode: Not Supported 00:10:00.537 NVM Sets: Not Supported 00:10:00.537 Read Recovery Levels: Not Supported 00:10:00.537 Endurance Groups: Not Supported 00:10:00.537 Predictable Latency Mode: Not Supported 00:10:00.537 Traffic Based Keep ALive: Not Supported 00:10:00.537 Namespace Granularity: Not Supported 00:10:00.537 SQ Associations: Not Supported 00:10:00.537 UUID List: Not Supported 00:10:00.537 Multi-Domain Subsystem: Not Supported 00:10:00.537 Fixed Capacity Management: Not Supported 00:10:00.537 Variable Capacity Management: Not Supported 00:10:00.537 Delete Endurance Group: Not Supported 00:10:00.537 Delete NVM Set: Not Supported 00:10:00.537 Extended LBA Formats Supported: Supported 00:10:00.537 Flexible Data Placement Supported: Not Supported 00:10:00.537 00:10:00.537 Controller Memory Buffer Support 00:10:00.537 ================================ 00:10:00.537 Supported: No 00:10:00.537 00:10:00.537 Persistent Memory Region Support 00:10:00.537 ================================ 00:10:00.537 Supported: No 00:10:00.537 00:10:00.537 Admin Command Set Attributes 00:10:00.537 ============================ 00:10:00.537 Security Send/Receive: Not Supported 00:10:00.537 Format NVM: Supported 00:10:00.537 Firmware Activate/Download: Not Supported 00:10:00.537 Namespace Management: Supported 00:10:00.537 Device Self-Test: Not Supported 00:10:00.537 Directives: Supported 00:10:00.537 NVMe-MI: Not Supported 00:10:00.537 Virtualization Management: Not Supported 00:10:00.537 Doorbell Buffer Config: Supported 00:10:00.537 Get LBA Status Capability: Not Supported 00:10:00.537 Command & Feature Lockdown Capability: Not Supported 00:10:00.537 Abort Command Limit: 4 00:10:00.537 Async Event Request Limit: 4 00:10:00.537 Number of Firmware Slots: N/A 00:10:00.537 Firmware Slot 1 Read-Only: N/A 00:10:00.537 Firmware Activation Without Reset: N/A 00:10:00.537 Multiple Update Detection Support: N/A 00:10:00.537 Firmware Update Granularity: No Information Provided 00:10:00.537 Per-Namespace SMART Log: Yes 00:10:00.537 Asymmetric Namespace Access Log Page: Not Supported 00:10:00.537 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:00.537 Command Effects Log Page: Supported 00:10:00.537 Get Log Page Extended Data: Supported 00:10:00.537 Telemetry Log Pages: Not Supported 00:10:00.537 Persistent Event Log Pages: Not Supported 
00:10:00.537 Supported Log Pages Log Page: May Support 00:10:00.537 Commands Supported & Effects Log Page: Not Supported 00:10:00.537 Feature Identifiers & Effects Log Page:May Support 00:10:00.537 NVMe-MI Commands & Effects Log Page: May Support 00:10:00.537 Data Area 4 for Telemetry Log: Not Supported 00:10:00.537 Error Log Page Entries Supported: 1 00:10:00.537 Keep Alive: Not Supported 00:10:00.537 00:10:00.537 NVM Command Set Attributes 00:10:00.537 ========================== 00:10:00.537 Submission Queue Entry Size 00:10:00.537 Max: 64 00:10:00.537 Min: 64 00:10:00.537 Completion Queue Entry Size 00:10:00.537 Max: 16 00:10:00.537 Min: 16 00:10:00.537 Number of Namespaces: 256 00:10:00.537 Compare Command: Supported 00:10:00.537 Write Uncorrectable Command: Not Supported 00:10:00.537 Dataset Management Command: Supported 00:10:00.537 Write Zeroes Command: Supported 00:10:00.537 Set Features Save Field: Supported 00:10:00.537 Reservations: Not Supported 00:10:00.537 Timestamp: Supported 00:10:00.537 Copy: Supported 00:10:00.537 Volatile Write Cache: Present 00:10:00.537 Atomic Write Unit (Normal): 1 00:10:00.537 Atomic Write Unit (PFail): 1 00:10:00.537 Atomic Compare & Write Unit: 1 00:10:00.537 Fused Compare & Write: Not Supported 00:10:00.537 Scatter-Gather List 00:10:00.537 SGL Command Set: Supported 00:10:00.537 SGL Keyed: Not Supported 00:10:00.537 SGL Bit Bucket Descriptor: Not Supported 00:10:00.537 SGL Metadata Pointer: Not Supported 00:10:00.537 Oversized SGL: Not Supported 00:10:00.537 SGL Metadata Address: Not Supported 00:10:00.537 SGL Offset: Not Supported 00:10:00.537 Transport SGL Data Block: Not Supported 00:10:00.537 Replay Protected Memory Block: Not Supported 00:10:00.537 00:10:00.537 Firmware Slot Information 00:10:00.537 ========================= 00:10:00.537 Active slot: 1 00:10:00.538 Slot 1 Firmware Revision: 1.0 00:10:00.538 00:10:00.538 00:10:00.538 Commands Supported and Effects 00:10:00.538 ============================== 00:10:00.538 Admin Commands 00:10:00.538 -------------- 00:10:00.538 Delete I/O Submission Queue (00h): Supported 00:10:00.538 Create I/O Submission Queue (01h): Supported 00:10:00.538 Get Log Page (02h): Supported 00:10:00.538 Delete I/O Completion Queue (04h): Supported 00:10:00.538 Create I/O Completion Queue (05h): Supported 00:10:00.538 Identify (06h): Supported 00:10:00.538 Abort (08h): Supported 00:10:00.538 Set Features (09h): Supported 00:10:00.538 Get Features (0Ah): Supported 00:10:00.538 Asynchronous Event Request (0Ch): Supported 00:10:00.538 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:00.538 Directive Send (19h): Supported 00:10:00.538 Directive Receive (1Ah): Supported 00:10:00.538 Virtualization Management (1Ch): Supported 00:10:00.538 Doorbell Buffer Config (7Ch): Supported 00:10:00.538 Format NVM (80h): Supported LBA-Change 00:10:00.538 I/O Commands 00:10:00.538 ------------ 00:10:00.538 Flush (00h): Supported LBA-Change 00:10:00.538 Write (01h): Supported LBA-Change 00:10:00.538 Read (02h): Supported 00:10:00.538 Compare (05h): Supported 00:10:00.538 Write Zeroes (08h): Supported LBA-Change 00:10:00.538 Dataset Management (09h): Supported LBA-Change 00:10:00.538 Unknown (0Ch): Supported 00:10:00.538 Unknown (12h): Supported 00:10:00.538 Copy (19h): Supported LBA-Change 00:10:00.538 Unknown (1Dh): Supported LBA-Change 00:10:00.538 00:10:00.538 Error Log 00:10:00.538 ========= 00:10:00.538 00:10:00.538 Arbitration 00:10:00.538 =========== 00:10:00.538 Arbitration Burst: no limit 00:10:00.538 00:10:00.538 
Power Management 00:10:00.538 ================ 00:10:00.538 Number of Power States: 1 00:10:00.538 Current Power State: Power State #0 00:10:00.538 Power State #0: 00:10:00.538 Max Power: 25.00 W 00:10:00.538 Non-Operational State: Operational 00:10:00.538 Entry Latency: 16 microseconds 00:10:00.538 Exit Latency: 4 microseconds 00:10:00.538 Relative Read Throughput: 0 00:10:00.538 Relative Read Latency: 0 00:10:00.538 Relative Write Throughput: 0 00:10:00.538 Relative Write Latency: 0 00:10:00.538 Idle Power: Not Reported 00:10:00.538 Active Power: Not Reported 00:10:00.538 Non-Operational Permissive Mode: Not Supported 00:10:00.538 00:10:00.538 Health Information 00:10:00.538 ================== 00:10:00.538 Critical Warnings: 00:10:00.538 Available Spare Space: OK 00:10:00.538 Temperature: OK 00:10:00.538 Device Reliability: OK 00:10:00.538 Read Only: No 00:10:00.538 Volatile Memory Backup: OK 00:10:00.538 Current Temperature: 323 Kelvin (50 Celsius) 00:10:00.538 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:00.538 Available Spare: 0% 00:10:00.538 Available Spare Threshold: 0% 00:10:00.538 Life Percentage Used: 0% 00:10:00.538 Data Units Read: 2109 00:10:00.538 Data Units Written: 1790 00:10:00.538 Host Read Commands: 97763 00:10:00.538 Host Write Commands: 93533 00:10:00.538 Controller Busy Time: 0 minutes 00:10:00.538 Power Cycles: 0 00:10:00.538 Power On Hours: 0 hours 00:10:00.538 Unsafe Shutdowns: 0 00:10:00.538 Unrecoverable Media Errors: 0 00:10:00.538 Lifetime Error Log Entries: 0 00:10:00.538 Warning Temperature Time: 0 minutes 00:10:00.538 Critical Temperature Time: 0 minutes 00:10:00.538 00:10:00.538 Number of Queues 00:10:00.538 ================ 00:10:00.538 Number of I/O Submission Queues: 64 00:10:00.538 Number of I/O Completion Queues: 64 00:10:00.538 00:10:00.538 ZNS Specific Controller Data 00:10:00.538 ============================ 00:10:00.538 Zone Append Size Limit: 0 00:10:00.538 00:10:00.538 00:10:00.538 Active Namespaces 00:10:00.538 ================= 00:10:00.538 Namespace ID:1 00:10:00.538 Error Recovery Timeout: Unlimited 00:10:00.538 Command Set Identifier: NVM (00h) 00:10:00.538 Deallocate: Supported 00:10:00.538 Deallocated/Unwritten Error: Supported 00:10:00.538 Deallocated Read Value: All 0x00 00:10:00.538 Deallocate in Write Zeroes: Not Supported 00:10:00.538 Deallocated Guard Field: 0xFFFF 00:10:00.538 Flush: Supported 00:10:00.538 Reservation: Not Supported 00:10:00.538 Namespace Sharing Capabilities: Private 00:10:00.538 Size (in LBAs): 1048576 (4GiB) 00:10:00.538 Capacity (in LBAs): 1048576 (4GiB) 00:10:00.538 Utilization (in LBAs): 1048576 (4GiB) 00:10:00.538 Thin Provisioning: Not Supported 00:10:00.538 Per-NS Atomic Units: No 00:10:00.538 Maximum Single Source Range Length: 128 00:10:00.538 Maximum Copy Length: 128 00:10:00.538 Maximum Source Range Count: 128 00:10:00.538 NGUID/EUI64 Never Reused: No 00:10:00.538 Namespace Write Protected: No 00:10:00.538 Number of LBA Formats: 8 00:10:00.538 Current LBA Format: LBA Format #04 00:10:00.538 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.538 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.538 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.538 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.538 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.538 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.538 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.538 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.538 00:10:00.538 NVM 
Specific Namespace Data 00:10:00.538 =========================== 00:10:00.538 Logical Block Storage Tag Mask: 0 00:10:00.538 Protection Information Capabilities: 00:10:00.538 16b Guard Protection Information Storage Tag Support: No 00:10:00.538 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.538 Storage Tag Check Read Support: No 00:10:00.538 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Namespace ID:2 00:10:00.538 Error Recovery Timeout: Unlimited 00:10:00.538 Command Set Identifier: NVM (00h) 00:10:00.538 Deallocate: Supported 00:10:00.538 Deallocated/Unwritten Error: Supported 00:10:00.538 Deallocated Read Value: All 0x00 00:10:00.538 Deallocate in Write Zeroes: Not Supported 00:10:00.538 Deallocated Guard Field: 0xFFFF 00:10:00.538 Flush: Supported 00:10:00.538 Reservation: Not Supported 00:10:00.538 Namespace Sharing Capabilities: Private 00:10:00.538 Size (in LBAs): 1048576 (4GiB) 00:10:00.538 Capacity (in LBAs): 1048576 (4GiB) 00:10:00.538 Utilization (in LBAs): 1048576 (4GiB) 00:10:00.538 Thin Provisioning: Not Supported 00:10:00.538 Per-NS Atomic Units: No 00:10:00.538 Maximum Single Source Range Length: 128 00:10:00.538 Maximum Copy Length: 128 00:10:00.538 Maximum Source Range Count: 128 00:10:00.538 NGUID/EUI64 Never Reused: No 00:10:00.538 Namespace Write Protected: No 00:10:00.538 Number of LBA Formats: 8 00:10:00.538 Current LBA Format: LBA Format #04 00:10:00.538 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.538 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.538 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.538 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.538 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.538 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.538 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.538 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.538 00:10:00.538 NVM Specific Namespace Data 00:10:00.538 =========================== 00:10:00.538 Logical Block Storage Tag Mask: 0 00:10:00.538 Protection Information Capabilities: 00:10:00.538 16b Guard Protection Information Storage Tag Support: No 00:10:00.538 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.538 Storage Tag Check Read Support: No 00:10:00.538 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA 
Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.538 Namespace ID:3 00:10:00.538 Error Recovery Timeout: Unlimited 00:10:00.538 Command Set Identifier: NVM (00h) 00:10:00.538 Deallocate: Supported 00:10:00.538 Deallocated/Unwritten Error: Supported 00:10:00.538 Deallocated Read Value: All 0x00 00:10:00.538 Deallocate in Write Zeroes: Not Supported 00:10:00.538 Deallocated Guard Field: 0xFFFF 00:10:00.538 Flush: Supported 00:10:00.538 Reservation: Not Supported 00:10:00.538 Namespace Sharing Capabilities: Private 00:10:00.539 Size (in LBAs): 1048576 (4GiB) 00:10:00.539 Capacity (in LBAs): 1048576 (4GiB) 00:10:00.539 Utilization (in LBAs): 1048576 (4GiB) 00:10:00.539 Thin Provisioning: Not Supported 00:10:00.539 Per-NS Atomic Units: No 00:10:00.539 Maximum Single Source Range Length: 128 00:10:00.539 Maximum Copy Length: 128 00:10:00.539 Maximum Source Range Count: 128 00:10:00.539 NGUID/EUI64 Never Reused: No 00:10:00.539 Namespace Write Protected: No 00:10:00.539 Number of LBA Formats: 8 00:10:00.539 Current LBA Format: LBA Format #04 00:10:00.539 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.539 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.539 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.539 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.539 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.539 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.539 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.539 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.539 00:10:00.539 NVM Specific Namespace Data 00:10:00.539 =========================== 00:10:00.539 Logical Block Storage Tag Mask: 0 00:10:00.539 Protection Information Capabilities: 00:10:00.539 16b Guard Protection Information Storage Tag Support: No 00:10:00.539 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.539 Storage Tag Check Read Support: No 00:10:00.539 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.539 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.539 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.539 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.539 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.539 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.539 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.539 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.539 20:26:54 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:00.539 20:26:54 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:00.797 ===================================================== 00:10:00.797 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:00.797 
===================================================== 00:10:00.797 Controller Capabilities/Features 00:10:00.797 ================================ 00:10:00.797 Vendor ID: 1b36 00:10:00.797 Subsystem Vendor ID: 1af4 00:10:00.797 Serial Number: 12340 00:10:00.797 Model Number: QEMU NVMe Ctrl 00:10:00.797 Firmware Version: 8.0.0 00:10:00.797 Recommended Arb Burst: 6 00:10:00.797 IEEE OUI Identifier: 00 54 52 00:10:00.797 Multi-path I/O 00:10:00.797 May have multiple subsystem ports: No 00:10:00.797 May have multiple controllers: No 00:10:00.797 Associated with SR-IOV VF: No 00:10:00.797 Max Data Transfer Size: 524288 00:10:00.797 Max Number of Namespaces: 256 00:10:00.797 Max Number of I/O Queues: 64 00:10:00.797 NVMe Specification Version (VS): 1.4 00:10:00.797 NVMe Specification Version (Identify): 1.4 00:10:00.797 Maximum Queue Entries: 2048 00:10:00.797 Contiguous Queues Required: Yes 00:10:00.797 Arbitration Mechanisms Supported 00:10:00.797 Weighted Round Robin: Not Supported 00:10:00.797 Vendor Specific: Not Supported 00:10:00.797 Reset Timeout: 7500 ms 00:10:00.797 Doorbell Stride: 4 bytes 00:10:00.797 NVM Subsystem Reset: Not Supported 00:10:00.797 Command Sets Supported 00:10:00.797 NVM Command Set: Supported 00:10:00.797 Boot Partition: Not Supported 00:10:00.797 Memory Page Size Minimum: 4096 bytes 00:10:00.797 Memory Page Size Maximum: 65536 bytes 00:10:00.797 Persistent Memory Region: Not Supported 00:10:00.797 Optional Asynchronous Events Supported 00:10:00.797 Namespace Attribute Notices: Supported 00:10:00.797 Firmware Activation Notices: Not Supported 00:10:00.797 ANA Change Notices: Not Supported 00:10:00.797 PLE Aggregate Log Change Notices: Not Supported 00:10:00.797 LBA Status Info Alert Notices: Not Supported 00:10:00.797 EGE Aggregate Log Change Notices: Not Supported 00:10:00.798 Normal NVM Subsystem Shutdown event: Not Supported 00:10:00.798 Zone Descriptor Change Notices: Not Supported 00:10:00.798 Discovery Log Change Notices: Not Supported 00:10:00.798 Controller Attributes 00:10:00.798 128-bit Host Identifier: Not Supported 00:10:00.798 Non-Operational Permissive Mode: Not Supported 00:10:00.798 NVM Sets: Not Supported 00:10:00.798 Read Recovery Levels: Not Supported 00:10:00.798 Endurance Groups: Not Supported 00:10:00.798 Predictable Latency Mode: Not Supported 00:10:00.798 Traffic Based Keep ALive: Not Supported 00:10:00.798 Namespace Granularity: Not Supported 00:10:00.798 SQ Associations: Not Supported 00:10:00.798 UUID List: Not Supported 00:10:00.798 Multi-Domain Subsystem: Not Supported 00:10:00.798 Fixed Capacity Management: Not Supported 00:10:00.798 Variable Capacity Management: Not Supported 00:10:00.798 Delete Endurance Group: Not Supported 00:10:00.798 Delete NVM Set: Not Supported 00:10:00.798 Extended LBA Formats Supported: Supported 00:10:00.798 Flexible Data Placement Supported: Not Supported 00:10:00.798 00:10:00.798 Controller Memory Buffer Support 00:10:00.798 ================================ 00:10:00.798 Supported: No 00:10:00.798 00:10:00.798 Persistent Memory Region Support 00:10:00.798 ================================ 00:10:00.798 Supported: No 00:10:00.798 00:10:00.798 Admin Command Set Attributes 00:10:00.798 ============================ 00:10:00.798 Security Send/Receive: Not Supported 00:10:00.798 Format NVM: Supported 00:10:00.798 Firmware Activate/Download: Not Supported 00:10:00.798 Namespace Management: Supported 00:10:00.798 Device Self-Test: Not Supported 00:10:00.798 Directives: Supported 00:10:00.798 NVMe-MI: Not Supported 
00:10:00.798 Virtualization Management: Not Supported 00:10:00.798 Doorbell Buffer Config: Supported 00:10:00.798 Get LBA Status Capability: Not Supported 00:10:00.798 Command & Feature Lockdown Capability: Not Supported 00:10:00.798 Abort Command Limit: 4 00:10:00.798 Async Event Request Limit: 4 00:10:00.798 Number of Firmware Slots: N/A 00:10:00.798 Firmware Slot 1 Read-Only: N/A 00:10:00.798 Firmware Activation Without Reset: N/A 00:10:00.798 Multiple Update Detection Support: N/A 00:10:00.798 Firmware Update Granularity: No Information Provided 00:10:00.798 Per-Namespace SMART Log: Yes 00:10:00.798 Asymmetric Namespace Access Log Page: Not Supported 00:10:00.798 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:00.798 Command Effects Log Page: Supported 00:10:00.798 Get Log Page Extended Data: Supported 00:10:00.798 Telemetry Log Pages: Not Supported 00:10:00.798 Persistent Event Log Pages: Not Supported 00:10:00.798 Supported Log Pages Log Page: May Support 00:10:00.798 Commands Supported & Effects Log Page: Not Supported 00:10:00.798 Feature Identifiers & Effects Log Page:May Support 00:10:00.798 NVMe-MI Commands & Effects Log Page: May Support 00:10:00.798 Data Area 4 for Telemetry Log: Not Supported 00:10:00.798 Error Log Page Entries Supported: 1 00:10:00.798 Keep Alive: Not Supported 00:10:00.798 00:10:00.798 NVM Command Set Attributes 00:10:00.798 ========================== 00:10:00.798 Submission Queue Entry Size 00:10:00.798 Max: 64 00:10:00.798 Min: 64 00:10:00.798 Completion Queue Entry Size 00:10:00.798 Max: 16 00:10:00.798 Min: 16 00:10:00.798 Number of Namespaces: 256 00:10:00.798 Compare Command: Supported 00:10:00.798 Write Uncorrectable Command: Not Supported 00:10:00.798 Dataset Management Command: Supported 00:10:00.798 Write Zeroes Command: Supported 00:10:00.798 Set Features Save Field: Supported 00:10:00.798 Reservations: Not Supported 00:10:00.798 Timestamp: Supported 00:10:00.798 Copy: Supported 00:10:00.798 Volatile Write Cache: Present 00:10:00.798 Atomic Write Unit (Normal): 1 00:10:00.798 Atomic Write Unit (PFail): 1 00:10:00.798 Atomic Compare & Write Unit: 1 00:10:00.798 Fused Compare & Write: Not Supported 00:10:00.798 Scatter-Gather List 00:10:00.798 SGL Command Set: Supported 00:10:00.798 SGL Keyed: Not Supported 00:10:00.798 SGL Bit Bucket Descriptor: Not Supported 00:10:00.798 SGL Metadata Pointer: Not Supported 00:10:00.798 Oversized SGL: Not Supported 00:10:00.798 SGL Metadata Address: Not Supported 00:10:00.798 SGL Offset: Not Supported 00:10:00.798 Transport SGL Data Block: Not Supported 00:10:00.798 Replay Protected Memory Block: Not Supported 00:10:00.798 00:10:00.798 Firmware Slot Information 00:10:00.798 ========================= 00:10:00.798 Active slot: 1 00:10:00.798 Slot 1 Firmware Revision: 1.0 00:10:00.798 00:10:00.798 00:10:00.798 Commands Supported and Effects 00:10:00.798 ============================== 00:10:00.798 Admin Commands 00:10:00.798 -------------- 00:10:00.798 Delete I/O Submission Queue (00h): Supported 00:10:00.798 Create I/O Submission Queue (01h): Supported 00:10:00.798 Get Log Page (02h): Supported 00:10:00.798 Delete I/O Completion Queue (04h): Supported 00:10:00.798 Create I/O Completion Queue (05h): Supported 00:10:00.798 Identify (06h): Supported 00:10:00.798 Abort (08h): Supported 00:10:00.798 Set Features (09h): Supported 00:10:00.798 Get Features (0Ah): Supported 00:10:00.798 Asynchronous Event Request (0Ch): Supported 00:10:00.798 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:00.798 Directive 
Send (19h): Supported 00:10:00.798 Directive Receive (1Ah): Supported 00:10:00.798 Virtualization Management (1Ch): Supported 00:10:00.798 Doorbell Buffer Config (7Ch): Supported 00:10:00.798 Format NVM (80h): Supported LBA-Change 00:10:00.798 I/O Commands 00:10:00.798 ------------ 00:10:00.798 Flush (00h): Supported LBA-Change 00:10:00.798 Write (01h): Supported LBA-Change 00:10:00.798 Read (02h): Supported 00:10:00.798 Compare (05h): Supported 00:10:00.798 Write Zeroes (08h): Supported LBA-Change 00:10:00.798 Dataset Management (09h): Supported LBA-Change 00:10:00.798 Unknown (0Ch): Supported 00:10:00.798 Unknown (12h): Supported 00:10:00.798 Copy (19h): Supported LBA-Change 00:10:00.798 Unknown (1Dh): Supported LBA-Change 00:10:00.798 00:10:00.798 Error Log 00:10:00.798 ========= 00:10:00.798 00:10:00.798 Arbitration 00:10:00.798 =========== 00:10:00.798 Arbitration Burst: no limit 00:10:00.798 00:10:00.798 Power Management 00:10:00.798 ================ 00:10:00.798 Number of Power States: 1 00:10:00.798 Current Power State: Power State #0 00:10:00.798 Power State #0: 00:10:00.798 Max Power: 25.00 W 00:10:00.798 Non-Operational State: Operational 00:10:00.798 Entry Latency: 16 microseconds 00:10:00.798 Exit Latency: 4 microseconds 00:10:00.798 Relative Read Throughput: 0 00:10:00.798 Relative Read Latency: 0 00:10:00.798 Relative Write Throughput: 0 00:10:00.798 Relative Write Latency: 0 00:10:00.798 Idle Power: Not Reported 00:10:00.798 Active Power: Not Reported 00:10:00.798 Non-Operational Permissive Mode: Not Supported 00:10:00.798 00:10:00.798 Health Information 00:10:00.798 ================== 00:10:00.798 Critical Warnings: 00:10:00.798 Available Spare Space: OK 00:10:00.798 Temperature: OK 00:10:00.798 Device Reliability: OK 00:10:00.798 Read Only: No 00:10:00.798 Volatile Memory Backup: OK 00:10:00.798 Current Temperature: 323 Kelvin (50 Celsius) 00:10:00.798 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:00.798 Available Spare: 0% 00:10:00.798 Available Spare Threshold: 0% 00:10:00.798 Life Percentage Used: 0% 00:10:00.798 Data Units Read: 1026 00:10:00.798 Data Units Written: 854 00:10:00.798 Host Read Commands: 47416 00:10:00.798 Host Write Commands: 45861 00:10:00.798 Controller Busy Time: 0 minutes 00:10:00.798 Power Cycles: 0 00:10:00.798 Power On Hours: 0 hours 00:10:00.798 Unsafe Shutdowns: 0 00:10:00.798 Unrecoverable Media Errors: 0 00:10:00.798 Lifetime Error Log Entries: 0 00:10:00.798 Warning Temperature Time: 0 minutes 00:10:00.798 Critical Temperature Time: 0 minutes 00:10:00.798 00:10:00.798 Number of Queues 00:10:00.798 ================ 00:10:00.798 Number of I/O Submission Queues: 64 00:10:00.798 Number of I/O Completion Queues: 64 00:10:00.798 00:10:00.799 ZNS Specific Controller Data 00:10:00.799 ============================ 00:10:00.799 Zone Append Size Limit: 0 00:10:00.799 00:10:00.799 00:10:00.799 Active Namespaces 00:10:00.799 ================= 00:10:00.799 Namespace ID:1 00:10:00.799 Error Recovery Timeout: Unlimited 00:10:00.799 Command Set Identifier: NVM (00h) 00:10:00.799 Deallocate: Supported 00:10:00.799 Deallocated/Unwritten Error: Supported 00:10:00.799 Deallocated Read Value: All 0x00 00:10:00.799 Deallocate in Write Zeroes: Not Supported 00:10:00.799 Deallocated Guard Field: 0xFFFF 00:10:00.799 Flush: Supported 00:10:00.799 Reservation: Not Supported 00:10:00.799 Metadata Transferred as: Separate Metadata Buffer 00:10:00.799 Namespace Sharing Capabilities: Private 00:10:00.799 Size (in LBAs): 1548666 (5GiB) 00:10:00.799 Capacity (in 
LBAs): 1548666 (5GiB) 00:10:00.799 Utilization (in LBAs): 1548666 (5GiB) 00:10:00.799 Thin Provisioning: Not Supported 00:10:00.799 Per-NS Atomic Units: No 00:10:00.799 Maximum Single Source Range Length: 128 00:10:00.799 Maximum Copy Length: 128 00:10:00.799 Maximum Source Range Count: 128 00:10:00.799 NGUID/EUI64 Never Reused: No 00:10:00.799 Namespace Write Protected: No 00:10:00.799 Number of LBA Formats: 8 00:10:00.799 Current LBA Format: LBA Format #07 00:10:00.799 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.799 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.799 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.799 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.799 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.799 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.799 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.799 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.799 00:10:00.799 NVM Specific Namespace Data 00:10:00.799 =========================== 00:10:00.799 Logical Block Storage Tag Mask: 0 00:10:00.799 Protection Information Capabilities: 00:10:00.799 16b Guard Protection Information Storage Tag Support: No 00:10:00.799 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.799 Storage Tag Check Read Support: No 00:10:00.799 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.799 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.799 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.799 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.799 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.799 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.799 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.799 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.799 20:26:54 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:00.799 20:26:54 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:01.058 ===================================================== 00:10:01.058 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:01.058 ===================================================== 00:10:01.058 Controller Capabilities/Features 00:10:01.058 ================================ 00:10:01.058 Vendor ID: 1b36 00:10:01.058 Subsystem Vendor ID: 1af4 00:10:01.058 Serial Number: 12341 00:10:01.058 Model Number: QEMU NVMe Ctrl 00:10:01.058 Firmware Version: 8.0.0 00:10:01.058 Recommended Arb Burst: 6 00:10:01.058 IEEE OUI Identifier: 00 54 52 00:10:01.058 Multi-path I/O 00:10:01.058 May have multiple subsystem ports: No 00:10:01.058 May have multiple controllers: No 00:10:01.058 Associated with SR-IOV VF: No 00:10:01.058 Max Data Transfer Size: 524288 00:10:01.058 Max Number of Namespaces: 256 00:10:01.058 Max Number of I/O Queues: 64 00:10:01.058 NVMe Specification Version (VS): 1.4 00:10:01.058 NVMe Specification Version (Identify): 1.4 00:10:01.058 Maximum Queue Entries: 2048 00:10:01.058 Contiguous Queues Required: Yes 00:10:01.058 Arbitration Mechanisms Supported 00:10:01.058 Weighted Round 
Robin: Not Supported 00:10:01.058 Vendor Specific: Not Supported 00:10:01.058 Reset Timeout: 7500 ms 00:10:01.058 Doorbell Stride: 4 bytes 00:10:01.058 NVM Subsystem Reset: Not Supported 00:10:01.058 Command Sets Supported 00:10:01.058 NVM Command Set: Supported 00:10:01.058 Boot Partition: Not Supported 00:10:01.058 Memory Page Size Minimum: 4096 bytes 00:10:01.058 Memory Page Size Maximum: 65536 bytes 00:10:01.058 Persistent Memory Region: Not Supported 00:10:01.058 Optional Asynchronous Events Supported 00:10:01.058 Namespace Attribute Notices: Supported 00:10:01.058 Firmware Activation Notices: Not Supported 00:10:01.058 ANA Change Notices: Not Supported 00:10:01.058 PLE Aggregate Log Change Notices: Not Supported 00:10:01.058 LBA Status Info Alert Notices: Not Supported 00:10:01.058 EGE Aggregate Log Change Notices: Not Supported 00:10:01.058 Normal NVM Subsystem Shutdown event: Not Supported 00:10:01.058 Zone Descriptor Change Notices: Not Supported 00:10:01.058 Discovery Log Change Notices: Not Supported 00:10:01.058 Controller Attributes 00:10:01.058 128-bit Host Identifier: Not Supported 00:10:01.058 Non-Operational Permissive Mode: Not Supported 00:10:01.058 NVM Sets: Not Supported 00:10:01.058 Read Recovery Levels: Not Supported 00:10:01.058 Endurance Groups: Not Supported 00:10:01.058 Predictable Latency Mode: Not Supported 00:10:01.058 Traffic Based Keep ALive: Not Supported 00:10:01.058 Namespace Granularity: Not Supported 00:10:01.058 SQ Associations: Not Supported 00:10:01.058 UUID List: Not Supported 00:10:01.058 Multi-Domain Subsystem: Not Supported 00:10:01.058 Fixed Capacity Management: Not Supported 00:10:01.058 Variable Capacity Management: Not Supported 00:10:01.058 Delete Endurance Group: Not Supported 00:10:01.058 Delete NVM Set: Not Supported 00:10:01.058 Extended LBA Formats Supported: Supported 00:10:01.058 Flexible Data Placement Supported: Not Supported 00:10:01.058 00:10:01.058 Controller Memory Buffer Support 00:10:01.058 ================================ 00:10:01.058 Supported: No 00:10:01.058 00:10:01.058 Persistent Memory Region Support 00:10:01.058 ================================ 00:10:01.058 Supported: No 00:10:01.058 00:10:01.058 Admin Command Set Attributes 00:10:01.058 ============================ 00:10:01.058 Security Send/Receive: Not Supported 00:10:01.058 Format NVM: Supported 00:10:01.058 Firmware Activate/Download: Not Supported 00:10:01.058 Namespace Management: Supported 00:10:01.058 Device Self-Test: Not Supported 00:10:01.058 Directives: Supported 00:10:01.058 NVMe-MI: Not Supported 00:10:01.058 Virtualization Management: Not Supported 00:10:01.058 Doorbell Buffer Config: Supported 00:10:01.058 Get LBA Status Capability: Not Supported 00:10:01.058 Command & Feature Lockdown Capability: Not Supported 00:10:01.058 Abort Command Limit: 4 00:10:01.058 Async Event Request Limit: 4 00:10:01.058 Number of Firmware Slots: N/A 00:10:01.058 Firmware Slot 1 Read-Only: N/A 00:10:01.058 Firmware Activation Without Reset: N/A 00:10:01.058 Multiple Update Detection Support: N/A 00:10:01.058 Firmware Update Granularity: No Information Provided 00:10:01.058 Per-Namespace SMART Log: Yes 00:10:01.058 Asymmetric Namespace Access Log Page: Not Supported 00:10:01.058 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:01.058 Command Effects Log Page: Supported 00:10:01.058 Get Log Page Extended Data: Supported 00:10:01.058 Telemetry Log Pages: Not Supported 00:10:01.058 Persistent Event Log Pages: Not Supported 00:10:01.058 Supported Log Pages Log Page: May Support 
00:10:01.058 Commands Supported & Effects Log Page: Not Supported 00:10:01.058 Feature Identifiers & Effects Log Page:May Support 00:10:01.058 NVMe-MI Commands & Effects Log Page: May Support 00:10:01.058 Data Area 4 for Telemetry Log: Not Supported 00:10:01.058 Error Log Page Entries Supported: 1 00:10:01.058 Keep Alive: Not Supported 00:10:01.058 00:10:01.058 NVM Command Set Attributes 00:10:01.058 ========================== 00:10:01.058 Submission Queue Entry Size 00:10:01.058 Max: 64 00:10:01.058 Min: 64 00:10:01.058 Completion Queue Entry Size 00:10:01.058 Max: 16 00:10:01.058 Min: 16 00:10:01.058 Number of Namespaces: 256 00:10:01.058 Compare Command: Supported 00:10:01.058 Write Uncorrectable Command: Not Supported 00:10:01.058 Dataset Management Command: Supported 00:10:01.058 Write Zeroes Command: Supported 00:10:01.058 Set Features Save Field: Supported 00:10:01.058 Reservations: Not Supported 00:10:01.058 Timestamp: Supported 00:10:01.058 Copy: Supported 00:10:01.058 Volatile Write Cache: Present 00:10:01.058 Atomic Write Unit (Normal): 1 00:10:01.058 Atomic Write Unit (PFail): 1 00:10:01.058 Atomic Compare & Write Unit: 1 00:10:01.058 Fused Compare & Write: Not Supported 00:10:01.058 Scatter-Gather List 00:10:01.058 SGL Command Set: Supported 00:10:01.058 SGL Keyed: Not Supported 00:10:01.058 SGL Bit Bucket Descriptor: Not Supported 00:10:01.058 SGL Metadata Pointer: Not Supported 00:10:01.058 Oversized SGL: Not Supported 00:10:01.058 SGL Metadata Address: Not Supported 00:10:01.058 SGL Offset: Not Supported 00:10:01.058 Transport SGL Data Block: Not Supported 00:10:01.058 Replay Protected Memory Block: Not Supported 00:10:01.058 00:10:01.058 Firmware Slot Information 00:10:01.058 ========================= 00:10:01.058 Active slot: 1 00:10:01.058 Slot 1 Firmware Revision: 1.0 00:10:01.058 00:10:01.058 00:10:01.058 Commands Supported and Effects 00:10:01.058 ============================== 00:10:01.058 Admin Commands 00:10:01.058 -------------- 00:10:01.058 Delete I/O Submission Queue (00h): Supported 00:10:01.058 Create I/O Submission Queue (01h): Supported 00:10:01.058 Get Log Page (02h): Supported 00:10:01.058 Delete I/O Completion Queue (04h): Supported 00:10:01.058 Create I/O Completion Queue (05h): Supported 00:10:01.058 Identify (06h): Supported 00:10:01.058 Abort (08h): Supported 00:10:01.058 Set Features (09h): Supported 00:10:01.058 Get Features (0Ah): Supported 00:10:01.058 Asynchronous Event Request (0Ch): Supported 00:10:01.058 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:01.058 Directive Send (19h): Supported 00:10:01.058 Directive Receive (1Ah): Supported 00:10:01.058 Virtualization Management (1Ch): Supported 00:10:01.058 Doorbell Buffer Config (7Ch): Supported 00:10:01.058 Format NVM (80h): Supported LBA-Change 00:10:01.058 I/O Commands 00:10:01.058 ------------ 00:10:01.058 Flush (00h): Supported LBA-Change 00:10:01.058 Write (01h): Supported LBA-Change 00:10:01.058 Read (02h): Supported 00:10:01.059 Compare (05h): Supported 00:10:01.059 Write Zeroes (08h): Supported LBA-Change 00:10:01.059 Dataset Management (09h): Supported LBA-Change 00:10:01.059 Unknown (0Ch): Supported 00:10:01.059 Unknown (12h): Supported 00:10:01.059 Copy (19h): Supported LBA-Change 00:10:01.059 Unknown (1Dh): Supported LBA-Change 00:10:01.059 00:10:01.059 Error Log 00:10:01.059 ========= 00:10:01.059 00:10:01.059 Arbitration 00:10:01.059 =========== 00:10:01.059 Arbitration Burst: no limit 00:10:01.059 00:10:01.059 Power Management 00:10:01.059 ================ 
00:10:01.059 Number of Power States: 1 00:10:01.059 Current Power State: Power State #0 00:10:01.059 Power State #0: 00:10:01.059 Max Power: 25.00 W 00:10:01.059 Non-Operational State: Operational 00:10:01.059 Entry Latency: 16 microseconds 00:10:01.059 Exit Latency: 4 microseconds 00:10:01.059 Relative Read Throughput: 0 00:10:01.059 Relative Read Latency: 0 00:10:01.059 Relative Write Throughput: 0 00:10:01.059 Relative Write Latency: 0 00:10:01.059 Idle Power: Not Reported 00:10:01.059 Active Power: Not Reported 00:10:01.059 Non-Operational Permissive Mode: Not Supported 00:10:01.059 00:10:01.059 Health Information 00:10:01.059 ================== 00:10:01.059 Critical Warnings: 00:10:01.059 Available Spare Space: OK 00:10:01.059 Temperature: OK 00:10:01.059 Device Reliability: OK 00:10:01.059 Read Only: No 00:10:01.059 Volatile Memory Backup: OK 00:10:01.059 Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.059 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:01.059 Available Spare: 0% 00:10:01.059 Available Spare Threshold: 0% 00:10:01.059 Life Percentage Used: 0% 00:10:01.059 Data Units Read: 725 00:10:01.059 Data Units Written: 576 00:10:01.059 Host Read Commands: 33297 00:10:01.059 Host Write Commands: 31071 00:10:01.059 Controller Busy Time: 0 minutes 00:10:01.059 Power Cycles: 0 00:10:01.059 Power On Hours: 0 hours 00:10:01.059 Unsafe Shutdowns: 0 00:10:01.059 Unrecoverable Media Errors: 0 00:10:01.059 Lifetime Error Log Entries: 0 00:10:01.059 Warning Temperature Time: 0 minutes 00:10:01.059 Critical Temperature Time: 0 minutes 00:10:01.059 00:10:01.059 Number of Queues 00:10:01.059 ================ 00:10:01.059 Number of I/O Submission Queues: 64 00:10:01.059 Number of I/O Completion Queues: 64 00:10:01.059 00:10:01.059 ZNS Specific Controller Data 00:10:01.059 ============================ 00:10:01.059 Zone Append Size Limit: 0 00:10:01.059 00:10:01.059 00:10:01.059 Active Namespaces 00:10:01.059 ================= 00:10:01.059 Namespace ID:1 00:10:01.059 Error Recovery Timeout: Unlimited 00:10:01.059 Command Set Identifier: NVM (00h) 00:10:01.059 Deallocate: Supported 00:10:01.059 Deallocated/Unwritten Error: Supported 00:10:01.059 Deallocated Read Value: All 0x00 00:10:01.059 Deallocate in Write Zeroes: Not Supported 00:10:01.059 Deallocated Guard Field: 0xFFFF 00:10:01.059 Flush: Supported 00:10:01.059 Reservation: Not Supported 00:10:01.059 Namespace Sharing Capabilities: Private 00:10:01.059 Size (in LBAs): 1310720 (5GiB) 00:10:01.059 Capacity (in LBAs): 1310720 (5GiB) 00:10:01.059 Utilization (in LBAs): 1310720 (5GiB) 00:10:01.059 Thin Provisioning: Not Supported 00:10:01.059 Per-NS Atomic Units: No 00:10:01.059 Maximum Single Source Range Length: 128 00:10:01.059 Maximum Copy Length: 128 00:10:01.059 Maximum Source Range Count: 128 00:10:01.059 NGUID/EUI64 Never Reused: No 00:10:01.059 Namespace Write Protected: No 00:10:01.059 Number of LBA Formats: 8 00:10:01.059 Current LBA Format: LBA Format #04 00:10:01.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.059 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.059 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.059 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.059 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.059 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.059 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.059 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.059 00:10:01.059 NVM Specific Namespace Data 00:10:01.059 
=========================== 00:10:01.059 Logical Block Storage Tag Mask: 0 00:10:01.059 Protection Information Capabilities: 00:10:01.059 16b Guard Protection Information Storage Tag Support: No 00:10:01.059 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:01.059 Storage Tag Check Read Support: No 00:10:01.059 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.059 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.059 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.059 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.059 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.059 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.059 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.059 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.059 20:26:55 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:01.059 20:26:55 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:01.387 ===================================================== 00:10:01.387 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:01.387 ===================================================== 00:10:01.387 Controller Capabilities/Features 00:10:01.387 ================================ 00:10:01.387 Vendor ID: 1b36 00:10:01.387 Subsystem Vendor ID: 1af4 00:10:01.387 Serial Number: 12342 00:10:01.387 Model Number: QEMU NVMe Ctrl 00:10:01.387 Firmware Version: 8.0.0 00:10:01.387 Recommended Arb Burst: 6 00:10:01.387 IEEE OUI Identifier: 00 54 52 00:10:01.387 Multi-path I/O 00:10:01.387 May have multiple subsystem ports: No 00:10:01.387 May have multiple controllers: No 00:10:01.387 Associated with SR-IOV VF: No 00:10:01.387 Max Data Transfer Size: 524288 00:10:01.387 Max Number of Namespaces: 256 00:10:01.387 Max Number of I/O Queues: 64 00:10:01.387 NVMe Specification Version (VS): 1.4 00:10:01.387 NVMe Specification Version (Identify): 1.4 00:10:01.387 Maximum Queue Entries: 2048 00:10:01.387 Contiguous Queues Required: Yes 00:10:01.387 Arbitration Mechanisms Supported 00:10:01.387 Weighted Round Robin: Not Supported 00:10:01.387 Vendor Specific: Not Supported 00:10:01.387 Reset Timeout: 7500 ms 00:10:01.387 Doorbell Stride: 4 bytes 00:10:01.387 NVM Subsystem Reset: Not Supported 00:10:01.387 Command Sets Supported 00:10:01.387 NVM Command Set: Supported 00:10:01.387 Boot Partition: Not Supported 00:10:01.387 Memory Page Size Minimum: 4096 bytes 00:10:01.387 Memory Page Size Maximum: 65536 bytes 00:10:01.387 Persistent Memory Region: Not Supported 00:10:01.387 Optional Asynchronous Events Supported 00:10:01.387 Namespace Attribute Notices: Supported 00:10:01.387 Firmware Activation Notices: Not Supported 00:10:01.387 ANA Change Notices: Not Supported 00:10:01.387 PLE Aggregate Log Change Notices: Not Supported 00:10:01.387 LBA Status Info Alert Notices: Not Supported 00:10:01.387 EGE Aggregate Log Change Notices: Not Supported 00:10:01.387 Normal NVM Subsystem Shutdown event: Not Supported 00:10:01.388 Zone Descriptor Change Notices: Not Supported 00:10:01.388 Discovery Log Change Notices: Not Supported 
00:10:01.388 Controller Attributes 00:10:01.388 128-bit Host Identifier: Not Supported 00:10:01.388 Non-Operational Permissive Mode: Not Supported 00:10:01.388 NVM Sets: Not Supported 00:10:01.388 Read Recovery Levels: Not Supported 00:10:01.388 Endurance Groups: Not Supported 00:10:01.388 Predictable Latency Mode: Not Supported 00:10:01.388 Traffic Based Keep ALive: Not Supported 00:10:01.388 Namespace Granularity: Not Supported 00:10:01.388 SQ Associations: Not Supported 00:10:01.388 UUID List: Not Supported 00:10:01.388 Multi-Domain Subsystem: Not Supported 00:10:01.388 Fixed Capacity Management: Not Supported 00:10:01.388 Variable Capacity Management: Not Supported 00:10:01.388 Delete Endurance Group: Not Supported 00:10:01.388 Delete NVM Set: Not Supported 00:10:01.388 Extended LBA Formats Supported: Supported 00:10:01.388 Flexible Data Placement Supported: Not Supported 00:10:01.388 00:10:01.388 Controller Memory Buffer Support 00:10:01.388 ================================ 00:10:01.388 Supported: No 00:10:01.388 00:10:01.388 Persistent Memory Region Support 00:10:01.388 ================================ 00:10:01.388 Supported: No 00:10:01.388 00:10:01.388 Admin Command Set Attributes 00:10:01.388 ============================ 00:10:01.388 Security Send/Receive: Not Supported 00:10:01.388 Format NVM: Supported 00:10:01.388 Firmware Activate/Download: Not Supported 00:10:01.388 Namespace Management: Supported 00:10:01.388 Device Self-Test: Not Supported 00:10:01.388 Directives: Supported 00:10:01.388 NVMe-MI: Not Supported 00:10:01.388 Virtualization Management: Not Supported 00:10:01.388 Doorbell Buffer Config: Supported 00:10:01.388 Get LBA Status Capability: Not Supported 00:10:01.388 Command & Feature Lockdown Capability: Not Supported 00:10:01.388 Abort Command Limit: 4 00:10:01.388 Async Event Request Limit: 4 00:10:01.388 Number of Firmware Slots: N/A 00:10:01.388 Firmware Slot 1 Read-Only: N/A 00:10:01.388 Firmware Activation Without Reset: N/A 00:10:01.388 Multiple Update Detection Support: N/A 00:10:01.388 Firmware Update Granularity: No Information Provided 00:10:01.388 Per-Namespace SMART Log: Yes 00:10:01.388 Asymmetric Namespace Access Log Page: Not Supported 00:10:01.388 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:01.388 Command Effects Log Page: Supported 00:10:01.388 Get Log Page Extended Data: Supported 00:10:01.388 Telemetry Log Pages: Not Supported 00:10:01.388 Persistent Event Log Pages: Not Supported 00:10:01.388 Supported Log Pages Log Page: May Support 00:10:01.388 Commands Supported & Effects Log Page: Not Supported 00:10:01.388 Feature Identifiers & Effects Log Page:May Support 00:10:01.388 NVMe-MI Commands & Effects Log Page: May Support 00:10:01.388 Data Area 4 for Telemetry Log: Not Supported 00:10:01.388 Error Log Page Entries Supported: 1 00:10:01.388 Keep Alive: Not Supported 00:10:01.388 00:10:01.388 NVM Command Set Attributes 00:10:01.388 ========================== 00:10:01.388 Submission Queue Entry Size 00:10:01.388 Max: 64 00:10:01.388 Min: 64 00:10:01.388 Completion Queue Entry Size 00:10:01.388 Max: 16 00:10:01.388 Min: 16 00:10:01.388 Number of Namespaces: 256 00:10:01.388 Compare Command: Supported 00:10:01.388 Write Uncorrectable Command: Not Supported 00:10:01.388 Dataset Management Command: Supported 00:10:01.388 Write Zeroes Command: Supported 00:10:01.388 Set Features Save Field: Supported 00:10:01.388 Reservations: Not Supported 00:10:01.388 Timestamp: Supported 00:10:01.388 Copy: Supported 00:10:01.388 Volatile Write Cache: Present 
00:10:01.388 Atomic Write Unit (Normal): 1 00:10:01.388 Atomic Write Unit (PFail): 1 00:10:01.388 Atomic Compare & Write Unit: 1 00:10:01.388 Fused Compare & Write: Not Supported 00:10:01.388 Scatter-Gather List 00:10:01.388 SGL Command Set: Supported 00:10:01.388 SGL Keyed: Not Supported 00:10:01.388 SGL Bit Bucket Descriptor: Not Supported 00:10:01.388 SGL Metadata Pointer: Not Supported 00:10:01.388 Oversized SGL: Not Supported 00:10:01.388 SGL Metadata Address: Not Supported 00:10:01.388 SGL Offset: Not Supported 00:10:01.388 Transport SGL Data Block: Not Supported 00:10:01.388 Replay Protected Memory Block: Not Supported 00:10:01.388 00:10:01.388 Firmware Slot Information 00:10:01.388 ========================= 00:10:01.388 Active slot: 1 00:10:01.388 Slot 1 Firmware Revision: 1.0 00:10:01.388 00:10:01.388 00:10:01.388 Commands Supported and Effects 00:10:01.388 ============================== 00:10:01.388 Admin Commands 00:10:01.388 -------------- 00:10:01.388 Delete I/O Submission Queue (00h): Supported 00:10:01.388 Create I/O Submission Queue (01h): Supported 00:10:01.388 Get Log Page (02h): Supported 00:10:01.388 Delete I/O Completion Queue (04h): Supported 00:10:01.388 Create I/O Completion Queue (05h): Supported 00:10:01.388 Identify (06h): Supported 00:10:01.388 Abort (08h): Supported 00:10:01.388 Set Features (09h): Supported 00:10:01.388 Get Features (0Ah): Supported 00:10:01.388 Asynchronous Event Request (0Ch): Supported 00:10:01.388 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:01.388 Directive Send (19h): Supported 00:10:01.388 Directive Receive (1Ah): Supported 00:10:01.388 Virtualization Management (1Ch): Supported 00:10:01.388 Doorbell Buffer Config (7Ch): Supported 00:10:01.388 Format NVM (80h): Supported LBA-Change 00:10:01.388 I/O Commands 00:10:01.388 ------------ 00:10:01.388 Flush (00h): Supported LBA-Change 00:10:01.388 Write (01h): Supported LBA-Change 00:10:01.388 Read (02h): Supported 00:10:01.388 Compare (05h): Supported 00:10:01.388 Write Zeroes (08h): Supported LBA-Change 00:10:01.388 Dataset Management (09h): Supported LBA-Change 00:10:01.388 Unknown (0Ch): Supported 00:10:01.388 Unknown (12h): Supported 00:10:01.388 Copy (19h): Supported LBA-Change 00:10:01.388 Unknown (1Dh): Supported LBA-Change 00:10:01.388 00:10:01.388 Error Log 00:10:01.388 ========= 00:10:01.388 00:10:01.388 Arbitration 00:10:01.388 =========== 00:10:01.388 Arbitration Burst: no limit 00:10:01.388 00:10:01.388 Power Management 00:10:01.388 ================ 00:10:01.388 Number of Power States: 1 00:10:01.388 Current Power State: Power State #0 00:10:01.388 Power State #0: 00:10:01.388 Max Power: 25.00 W 00:10:01.388 Non-Operational State: Operational 00:10:01.388 Entry Latency: 16 microseconds 00:10:01.388 Exit Latency: 4 microseconds 00:10:01.388 Relative Read Throughput: 0 00:10:01.388 Relative Read Latency: 0 00:10:01.388 Relative Write Throughput: 0 00:10:01.388 Relative Write Latency: 0 00:10:01.388 Idle Power: Not Reported 00:10:01.388 Active Power: Not Reported 00:10:01.388 Non-Operational Permissive Mode: Not Supported 00:10:01.388 00:10:01.388 Health Information 00:10:01.388 ================== 00:10:01.388 Critical Warnings: 00:10:01.388 Available Spare Space: OK 00:10:01.388 Temperature: OK 00:10:01.388 Device Reliability: OK 00:10:01.388 Read Only: No 00:10:01.388 Volatile Memory Backup: OK 00:10:01.388 Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.388 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:01.388 Available Spare: 0% 00:10:01.388 
Available Spare Threshold: 0% 00:10:01.388 Life Percentage Used: 0% 00:10:01.388 Data Units Read: 2109 00:10:01.388 Data Units Written: 1790 00:10:01.388 Host Read Commands: 97763 00:10:01.388 Host Write Commands: 93533 00:10:01.388 Controller Busy Time: 0 minutes 00:10:01.388 Power Cycles: 0 00:10:01.388 Power On Hours: 0 hours 00:10:01.388 Unsafe Shutdowns: 0 00:10:01.388 Unrecoverable Media Errors: 0 00:10:01.388 Lifetime Error Log Entries: 0 00:10:01.388 Warning Temperature Time: 0 minutes 00:10:01.388 Critical Temperature Time: 0 minutes 00:10:01.388 00:10:01.388 Number of Queues 00:10:01.388 ================ 00:10:01.388 Number of I/O Submission Queues: 64 00:10:01.388 Number of I/O Completion Queues: 64 00:10:01.388 00:10:01.388 ZNS Specific Controller Data 00:10:01.388 ============================ 00:10:01.388 Zone Append Size Limit: 0 00:10:01.388 00:10:01.388 00:10:01.388 Active Namespaces 00:10:01.388 ================= 00:10:01.388 Namespace ID:1 00:10:01.388 Error Recovery Timeout: Unlimited 00:10:01.388 Command Set Identifier: NVM (00h) 00:10:01.388 Deallocate: Supported 00:10:01.388 Deallocated/Unwritten Error: Supported 00:10:01.388 Deallocated Read Value: All 0x00 00:10:01.388 Deallocate in Write Zeroes: Not Supported 00:10:01.388 Deallocated Guard Field: 0xFFFF 00:10:01.388 Flush: Supported 00:10:01.388 Reservation: Not Supported 00:10:01.388 Namespace Sharing Capabilities: Private 00:10:01.388 Size (in LBAs): 1048576 (4GiB) 00:10:01.388 Capacity (in LBAs): 1048576 (4GiB) 00:10:01.388 Utilization (in LBAs): 1048576 (4GiB) 00:10:01.388 Thin Provisioning: Not Supported 00:10:01.388 Per-NS Atomic Units: No 00:10:01.388 Maximum Single Source Range Length: 128 00:10:01.388 Maximum Copy Length: 128 00:10:01.388 Maximum Source Range Count: 128 00:10:01.388 NGUID/EUI64 Never Reused: No 00:10:01.388 Namespace Write Protected: No 00:10:01.389 Number of LBA Formats: 8 00:10:01.389 Current LBA Format: LBA Format #04 00:10:01.389 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.389 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.389 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.389 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.389 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.389 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.389 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.389 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.389 00:10:01.389 NVM Specific Namespace Data 00:10:01.389 =========================== 00:10:01.389 Logical Block Storage Tag Mask: 0 00:10:01.389 Protection Information Capabilities: 00:10:01.389 16b Guard Protection Information Storage Tag Support: No 00:10:01.389 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:01.389 Storage Tag Check Read Support: No 00:10:01.389 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #06: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Namespace ID:2 00:10:01.389 Error Recovery Timeout: Unlimited 00:10:01.389 Command Set Identifier: NVM (00h) 00:10:01.389 Deallocate: Supported 00:10:01.389 Deallocated/Unwritten Error: Supported 00:10:01.389 Deallocated Read Value: All 0x00 00:10:01.389 Deallocate in Write Zeroes: Not Supported 00:10:01.389 Deallocated Guard Field: 0xFFFF 00:10:01.389 Flush: Supported 00:10:01.389 Reservation: Not Supported 00:10:01.389 Namespace Sharing Capabilities: Private 00:10:01.389 Size (in LBAs): 1048576 (4GiB) 00:10:01.389 Capacity (in LBAs): 1048576 (4GiB) 00:10:01.389 Utilization (in LBAs): 1048576 (4GiB) 00:10:01.389 Thin Provisioning: Not Supported 00:10:01.389 Per-NS Atomic Units: No 00:10:01.389 Maximum Single Source Range Length: 128 00:10:01.389 Maximum Copy Length: 128 00:10:01.389 Maximum Source Range Count: 128 00:10:01.389 NGUID/EUI64 Never Reused: No 00:10:01.389 Namespace Write Protected: No 00:10:01.389 Number of LBA Formats: 8 00:10:01.389 Current LBA Format: LBA Format #04 00:10:01.389 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.389 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.389 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.389 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.389 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.389 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.389 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.389 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.389 00:10:01.389 NVM Specific Namespace Data 00:10:01.389 =========================== 00:10:01.389 Logical Block Storage Tag Mask: 0 00:10:01.389 Protection Information Capabilities: 00:10:01.389 16b Guard Protection Information Storage Tag Support: No 00:10:01.389 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:01.389 Storage Tag Check Read Support: No 00:10:01.389 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Namespace ID:3 00:10:01.389 Error Recovery Timeout: Unlimited 00:10:01.389 Command Set Identifier: NVM (00h) 00:10:01.389 Deallocate: Supported 00:10:01.389 Deallocated/Unwritten Error: Supported 00:10:01.389 Deallocated Read Value: All 0x00 00:10:01.389 Deallocate in Write Zeroes: Not Supported 00:10:01.389 Deallocated Guard Field: 0xFFFF 00:10:01.389 Flush: Supported 00:10:01.389 Reservation: Not Supported 00:10:01.389 Namespace Sharing Capabilities: Private 00:10:01.389 Size (in LBAs): 1048576 (4GiB) 00:10:01.389 Capacity (in LBAs): 1048576 (4GiB) 00:10:01.389 Utilization (in LBAs): 1048576 (4GiB) 00:10:01.389 Thin Provisioning: Not Supported 
00:10:01.389 Per-NS Atomic Units: No 00:10:01.389 Maximum Single Source Range Length: 128 00:10:01.389 Maximum Copy Length: 128 00:10:01.389 Maximum Source Range Count: 128 00:10:01.389 NGUID/EUI64 Never Reused: No 00:10:01.389 Namespace Write Protected: No 00:10:01.389 Number of LBA Formats: 8 00:10:01.389 Current LBA Format: LBA Format #04 00:10:01.389 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.389 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.389 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.389 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.389 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.389 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.389 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.389 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.389 00:10:01.389 NVM Specific Namespace Data 00:10:01.389 =========================== 00:10:01.389 Logical Block Storage Tag Mask: 0 00:10:01.389 Protection Information Capabilities: 00:10:01.389 16b Guard Protection Information Storage Tag Support: No 00:10:01.389 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:01.389 Storage Tag Check Read Support: No 00:10:01.389 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.389 20:26:55 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:01.389 20:26:55 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:01.648 ===================================================== 00:10:01.648 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:01.648 ===================================================== 00:10:01.648 Controller Capabilities/Features 00:10:01.648 ================================ 00:10:01.648 Vendor ID: 1b36 00:10:01.648 Subsystem Vendor ID: 1af4 00:10:01.648 Serial Number: 12343 00:10:01.648 Model Number: QEMU NVMe Ctrl 00:10:01.648 Firmware Version: 8.0.0 00:10:01.648 Recommended Arb Burst: 6 00:10:01.648 IEEE OUI Identifier: 00 54 52 00:10:01.648 Multi-path I/O 00:10:01.648 May have multiple subsystem ports: No 00:10:01.648 May have multiple controllers: Yes 00:10:01.648 Associated with SR-IOV VF: No 00:10:01.648 Max Data Transfer Size: 524288 00:10:01.648 Max Number of Namespaces: 256 00:10:01.648 Max Number of I/O Queues: 64 00:10:01.648 NVMe Specification Version (VS): 1.4 00:10:01.648 NVMe Specification Version (Identify): 1.4 00:10:01.648 Maximum Queue Entries: 2048 00:10:01.648 Contiguous Queues Required: Yes 00:10:01.648 Arbitration Mechanisms Supported 00:10:01.648 Weighted Round Robin: Not Supported 00:10:01.648 Vendor Specific: Not Supported 00:10:01.648 Reset Timeout: 7500 ms 00:10:01.648 
Doorbell Stride: 4 bytes 00:10:01.648 NVM Subsystem Reset: Not Supported 00:10:01.648 Command Sets Supported 00:10:01.648 NVM Command Set: Supported 00:10:01.648 Boot Partition: Not Supported 00:10:01.648 Memory Page Size Minimum: 4096 bytes 00:10:01.648 Memory Page Size Maximum: 65536 bytes 00:10:01.648 Persistent Memory Region: Not Supported 00:10:01.648 Optional Asynchronous Events Supported 00:10:01.648 Namespace Attribute Notices: Supported 00:10:01.648 Firmware Activation Notices: Not Supported 00:10:01.648 ANA Change Notices: Not Supported 00:10:01.648 PLE Aggregate Log Change Notices: Not Supported 00:10:01.648 LBA Status Info Alert Notices: Not Supported 00:10:01.648 EGE Aggregate Log Change Notices: Not Supported 00:10:01.648 Normal NVM Subsystem Shutdown event: Not Supported 00:10:01.648 Zone Descriptor Change Notices: Not Supported 00:10:01.648 Discovery Log Change Notices: Not Supported 00:10:01.648 Controller Attributes 00:10:01.648 128-bit Host Identifier: Not Supported 00:10:01.648 Non-Operational Permissive Mode: Not Supported 00:10:01.648 NVM Sets: Not Supported 00:10:01.648 Read Recovery Levels: Not Supported 00:10:01.648 Endurance Groups: Supported 00:10:01.648 Predictable Latency Mode: Not Supported 00:10:01.648 Traffic Based Keep ALive: Not Supported 00:10:01.648 Namespace Granularity: Not Supported 00:10:01.648 SQ Associations: Not Supported 00:10:01.648 UUID List: Not Supported 00:10:01.648 Multi-Domain Subsystem: Not Supported 00:10:01.648 Fixed Capacity Management: Not Supported 00:10:01.648 Variable Capacity Management: Not Supported 00:10:01.648 Delete Endurance Group: Not Supported 00:10:01.648 Delete NVM Set: Not Supported 00:10:01.648 Extended LBA Formats Supported: Supported 00:10:01.648 Flexible Data Placement Supported: Supported 00:10:01.648 00:10:01.648 Controller Memory Buffer Support 00:10:01.648 ================================ 00:10:01.648 Supported: No 00:10:01.648 00:10:01.648 Persistent Memory Region Support 00:10:01.648 ================================ 00:10:01.648 Supported: No 00:10:01.648 00:10:01.648 Admin Command Set Attributes 00:10:01.648 ============================ 00:10:01.648 Security Send/Receive: Not Supported 00:10:01.648 Format NVM: Supported 00:10:01.648 Firmware Activate/Download: Not Supported 00:10:01.648 Namespace Management: Supported 00:10:01.648 Device Self-Test: Not Supported 00:10:01.648 Directives: Supported 00:10:01.648 NVMe-MI: Not Supported 00:10:01.648 Virtualization Management: Not Supported 00:10:01.648 Doorbell Buffer Config: Supported 00:10:01.648 Get LBA Status Capability: Not Supported 00:10:01.648 Command & Feature Lockdown Capability: Not Supported 00:10:01.648 Abort Command Limit: 4 00:10:01.648 Async Event Request Limit: 4 00:10:01.648 Number of Firmware Slots: N/A 00:10:01.648 Firmware Slot 1 Read-Only: N/A 00:10:01.648 Firmware Activation Without Reset: N/A 00:10:01.648 Multiple Update Detection Support: N/A 00:10:01.648 Firmware Update Granularity: No Information Provided 00:10:01.648 Per-Namespace SMART Log: Yes 00:10:01.648 Asymmetric Namespace Access Log Page: Not Supported 00:10:01.648 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:01.648 Command Effects Log Page: Supported 00:10:01.648 Get Log Page Extended Data: Supported 00:10:01.648 Telemetry Log Pages: Not Supported 00:10:01.648 Persistent Event Log Pages: Not Supported 00:10:01.648 Supported Log Pages Log Page: May Support 00:10:01.648 Commands Supported & Effects Log Page: Not Supported 00:10:01.648 Feature Identifiers & Effects Log 
Page:May Support 00:10:01.648 NVMe-MI Commands & Effects Log Page: May Support 00:10:01.648 Data Area 4 for Telemetry Log: Not Supported 00:10:01.648 Error Log Page Entries Supported: 1 00:10:01.648 Keep Alive: Not Supported 00:10:01.648 00:10:01.648 NVM Command Set Attributes 00:10:01.648 ========================== 00:10:01.648 Submission Queue Entry Size 00:10:01.648 Max: 64 00:10:01.648 Min: 64 00:10:01.649 Completion Queue Entry Size 00:10:01.649 Max: 16 00:10:01.649 Min: 16 00:10:01.649 Number of Namespaces: 256 00:10:01.649 Compare Command: Supported 00:10:01.649 Write Uncorrectable Command: Not Supported 00:10:01.649 Dataset Management Command: Supported 00:10:01.649 Write Zeroes Command: Supported 00:10:01.649 Set Features Save Field: Supported 00:10:01.649 Reservations: Not Supported 00:10:01.649 Timestamp: Supported 00:10:01.649 Copy: Supported 00:10:01.649 Volatile Write Cache: Present 00:10:01.649 Atomic Write Unit (Normal): 1 00:10:01.649 Atomic Write Unit (PFail): 1 00:10:01.649 Atomic Compare & Write Unit: 1 00:10:01.649 Fused Compare & Write: Not Supported 00:10:01.649 Scatter-Gather List 00:10:01.649 SGL Command Set: Supported 00:10:01.649 SGL Keyed: Not Supported 00:10:01.649 SGL Bit Bucket Descriptor: Not Supported 00:10:01.649 SGL Metadata Pointer: Not Supported 00:10:01.649 Oversized SGL: Not Supported 00:10:01.649 SGL Metadata Address: Not Supported 00:10:01.649 SGL Offset: Not Supported 00:10:01.649 Transport SGL Data Block: Not Supported 00:10:01.649 Replay Protected Memory Block: Not Supported 00:10:01.649 00:10:01.649 Firmware Slot Information 00:10:01.649 ========================= 00:10:01.649 Active slot: 1 00:10:01.649 Slot 1 Firmware Revision: 1.0 00:10:01.649 00:10:01.649 00:10:01.649 Commands Supported and Effects 00:10:01.649 ============================== 00:10:01.649 Admin Commands 00:10:01.649 -------------- 00:10:01.649 Delete I/O Submission Queue (00h): Supported 00:10:01.649 Create I/O Submission Queue (01h): Supported 00:10:01.649 Get Log Page (02h): Supported 00:10:01.649 Delete I/O Completion Queue (04h): Supported 00:10:01.649 Create I/O Completion Queue (05h): Supported 00:10:01.649 Identify (06h): Supported 00:10:01.649 Abort (08h): Supported 00:10:01.649 Set Features (09h): Supported 00:10:01.649 Get Features (0Ah): Supported 00:10:01.649 Asynchronous Event Request (0Ch): Supported 00:10:01.649 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:01.649 Directive Send (19h): Supported 00:10:01.649 Directive Receive (1Ah): Supported 00:10:01.649 Virtualization Management (1Ch): Supported 00:10:01.649 Doorbell Buffer Config (7Ch): Supported 00:10:01.649 Format NVM (80h): Supported LBA-Change 00:10:01.649 I/O Commands 00:10:01.649 ------------ 00:10:01.649 Flush (00h): Supported LBA-Change 00:10:01.649 Write (01h): Supported LBA-Change 00:10:01.649 Read (02h): Supported 00:10:01.649 Compare (05h): Supported 00:10:01.649 Write Zeroes (08h): Supported LBA-Change 00:10:01.649 Dataset Management (09h): Supported LBA-Change 00:10:01.649 Unknown (0Ch): Supported 00:10:01.649 Unknown (12h): Supported 00:10:01.649 Copy (19h): Supported LBA-Change 00:10:01.649 Unknown (1Dh): Supported LBA-Change 00:10:01.649 00:10:01.649 Error Log 00:10:01.649 ========= 00:10:01.649 00:10:01.649 Arbitration 00:10:01.649 =========== 00:10:01.649 Arbitration Burst: no limit 00:10:01.649 00:10:01.649 Power Management 00:10:01.649 ================ 00:10:01.649 Number of Power States: 1 00:10:01.649 Current Power State: Power State #0 00:10:01.649 Power State #0: 
00:10:01.649 Max Power: 25.00 W 00:10:01.649 Non-Operational State: Operational 00:10:01.649 Entry Latency: 16 microseconds 00:10:01.649 Exit Latency: 4 microseconds 00:10:01.649 Relative Read Throughput: 0 00:10:01.649 Relative Read Latency: 0 00:10:01.649 Relative Write Throughput: 0 00:10:01.649 Relative Write Latency: 0 00:10:01.649 Idle Power: Not Reported 00:10:01.649 Active Power: Not Reported 00:10:01.649 Non-Operational Permissive Mode: Not Supported 00:10:01.649 00:10:01.649 Health Information 00:10:01.649 ================== 00:10:01.649 Critical Warnings: 00:10:01.649 Available Spare Space: OK 00:10:01.649 Temperature: OK 00:10:01.649 Device Reliability: OK 00:10:01.649 Read Only: No 00:10:01.649 Volatile Memory Backup: OK 00:10:01.649 Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.649 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:01.649 Available Spare: 0% 00:10:01.649 Available Spare Threshold: 0% 00:10:01.649 Life Percentage Used: 0% 00:10:01.649 Data Units Read: 781 00:10:01.649 Data Units Written: 675 00:10:01.649 Host Read Commands: 33197 00:10:01.649 Host Write Commands: 31787 00:10:01.649 Controller Busy Time: 0 minutes 00:10:01.649 Power Cycles: 0 00:10:01.649 Power On Hours: 0 hours 00:10:01.649 Unsafe Shutdowns: 0 00:10:01.649 Unrecoverable Media Errors: 0 00:10:01.649 Lifetime Error Log Entries: 0 00:10:01.649 Warning Temperature Time: 0 minutes 00:10:01.649 Critical Temperature Time: 0 minutes 00:10:01.649 00:10:01.649 Number of Queues 00:10:01.649 ================ 00:10:01.649 Number of I/O Submission Queues: 64 00:10:01.649 Number of I/O Completion Queues: 64 00:10:01.649 00:10:01.649 ZNS Specific Controller Data 00:10:01.649 ============================ 00:10:01.649 Zone Append Size Limit: 0 00:10:01.649 00:10:01.649 00:10:01.649 Active Namespaces 00:10:01.649 ================= 00:10:01.649 Namespace ID:1 00:10:01.649 Error Recovery Timeout: Unlimited 00:10:01.649 Command Set Identifier: NVM (00h) 00:10:01.649 Deallocate: Supported 00:10:01.649 Deallocated/Unwritten Error: Supported 00:10:01.649 Deallocated Read Value: All 0x00 00:10:01.650 Deallocate in Write Zeroes: Not Supported 00:10:01.650 Deallocated Guard Field: 0xFFFF 00:10:01.650 Flush: Supported 00:10:01.650 Reservation: Not Supported 00:10:01.650 Namespace Sharing Capabilities: Multiple Controllers 00:10:01.650 Size (in LBAs): 262144 (1GiB) 00:10:01.650 Capacity (in LBAs): 262144 (1GiB) 00:10:01.650 Utilization (in LBAs): 262144 (1GiB) 00:10:01.650 Thin Provisioning: Not Supported 00:10:01.650 Per-NS Atomic Units: No 00:10:01.650 Maximum Single Source Range Length: 128 00:10:01.650 Maximum Copy Length: 128 00:10:01.650 Maximum Source Range Count: 128 00:10:01.650 NGUID/EUI64 Never Reused: No 00:10:01.650 Namespace Write Protected: No 00:10:01.650 Endurance group ID: 1 00:10:01.650 Number of LBA Formats: 8 00:10:01.650 Current LBA Format: LBA Format #04 00:10:01.650 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.650 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.650 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.650 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.650 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.650 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.650 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.650 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.650 00:10:01.650 Get Feature FDP: 00:10:01.650 ================ 00:10:01.650 Enabled: Yes 00:10:01.650 FDP configuration index: 0 00:10:01.650 
00:10:01.650 FDP configurations log page 00:10:01.650 =========================== 00:10:01.650 Number of FDP configurations: 1 00:10:01.650 Version: 0 00:10:01.650 Size: 112 00:10:01.650 FDP Configuration Descriptor: 0 00:10:01.650 Descriptor Size: 96 00:10:01.650 Reclaim Group Identifier format: 2 00:10:01.650 FDP Volatile Write Cache: Not Present 00:10:01.650 FDP Configuration: Valid 00:10:01.650 Vendor Specific Size: 0 00:10:01.650 Number of Reclaim Groups: 2 00:10:01.650 Number of Reclaim Unit Handles: 8 00:10:01.650 Max Placement Identifiers: 128 00:10:01.650 Number of Namespaces Supported: 256 00:10:01.650 Reclaim Unit Nominal Size: 6000000 bytes 00:10:01.650 Estimated Reclaim Unit Time Limit: Not Reported 00:10:01.650 RUH Desc #000: RUH Type: Initially Isolated 00:10:01.650 RUH Desc #001: RUH Type: Initially Isolated 00:10:01.650 RUH Desc #002: RUH Type: Initially Isolated 00:10:01.650 RUH Desc #003: RUH Type: Initially Isolated 00:10:01.650 RUH Desc #004: RUH Type: Initially Isolated 00:10:01.650 RUH Desc #005: RUH Type: Initially Isolated 00:10:01.650 RUH Desc #006: RUH Type: Initially Isolated 00:10:01.650 RUH Desc #007: RUH Type: Initially Isolated 00:10:01.650 00:10:01.650 FDP reclaim unit handle usage log page 00:10:01.650 ====================================== 00:10:01.650 Number of Reclaim Unit Handles: 8 00:10:01.650 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:01.650 RUH Usage Desc #001: RUH Attributes: Unused 00:10:01.650 RUH Usage Desc #002: RUH Attributes: Unused 00:10:01.650 RUH Usage Desc #003: RUH Attributes: Unused 00:10:01.650 RUH Usage Desc #004: RUH Attributes: Unused 00:10:01.650 RUH Usage Desc #005: RUH Attributes: Unused 00:10:01.650 RUH Usage Desc #006: RUH Attributes: Unused 00:10:01.650 RUH Usage Desc #007: RUH Attributes: Unused 00:10:01.650 00:10:01.650 FDP statistics log page 00:10:01.650 ======================= 00:10:01.650 Host bytes with metadata written: 416849920 00:10:01.650 Media bytes with metadata written: 416894976 00:10:01.650 Media bytes erased: 0 00:10:01.650 00:10:01.650 FDP events log page 00:10:01.650 =================== 00:10:01.650 Number of FDP events: 0 00:10:01.650 00:10:01.650 NVM Specific Namespace Data 00:10:01.650 =========================== 00:10:01.650 Logical Block Storage Tag Mask: 0 00:10:01.650 Protection Information Capabilities: 00:10:01.650 16b Guard Protection Information Storage Tag Support: No 00:10:01.650 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:01.650 Storage Tag Check Read Support: No 00:10:01.650 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.650 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.650 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.650 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.650 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.650 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.650 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.650 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.650 00:10:01.650 real 0m1.521s 00:10:01.650 user 0m0.564s 00:10:01.650 sys 0m0.762s 00:10:01.650 20:26:55 nvme.nvme_identify -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.650 20:26:55 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:01.650 ************************************ 00:10:01.650 END TEST nvme_identify 00:10:01.650 ************************************ 00:10:01.909 20:26:55 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:01.909 20:26:55 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:01.909 20:26:55 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:01.909 20:26:55 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.909 20:26:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:01.909 ************************************ 00:10:01.909 START TEST nvme_perf 00:10:01.909 ************************************ 00:10:01.909 20:26:55 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:10:01.909 20:26:55 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:03.288 Initializing NVMe Controllers 00:10:03.288 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:03.288 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:03.288 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:03.288 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:03.288 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:03.288 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:03.288 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:03.288 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:03.288 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:03.288 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:03.288 Initialization complete. Launching workers. 
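The latency table that follows is produced by the spdk_nvme_perf invocation above (-q 128 -w read -o 12288 -t 1 -LL -i 0 -N). As a reference, here is a minimal sketch of replaying that workload by hand; the binary path and flag values are taken from this log, while the per-flag comments, the use of sudo, and the root/hugepage requirement are assumptions based on a reading of the perf tool and should be checked against spdk_nvme_perf --help.

#!/usr/bin/env bash
# Minimal sketch (not part of the captured run): replay the logged read-latency workload.
# Flag values and the binary path come from the invocation above; the comment on each
# flag is an assumption to be verified with "$PERF --help".
set -euo pipefail

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
args=(
  -q 128      # outstanding I/Os per queue (queue depth)
  -w read     # sequential read workload
  -o 12288    # 12288-byte (12 KiB) I/O size
  -t 1        # run time in seconds
  -LL         # latency tracking; repeated so detailed histograms are printed as well
  -i 0        # shared-memory group id, matching the identify runs earlier in this test
  -N          # carried over verbatim from the logged invocation
)

sudo "${PERF}" "${args[@]}"   # PCIe probing typically needs root plus hugepage setup

In this mode the tool prints, per attached namespace, IOPS, throughput in MiB/s, and average, minimum, and maximum latency in microseconds, followed by percentile summaries and per-bucket latency histograms for each controller, as seen below.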
00:10:03.288 ======================================================== 00:10:03.288 Latency(us) 00:10:03.288 Device Information : IOPS MiB/s Average min max 00:10:03.288 PCIE (0000:00:10.0) NSID 1 from core 0: 11943.61 139.96 10722.10 8995.65 34033.50 00:10:03.288 PCIE (0000:00:11.0) NSID 1 from core 0: 11943.61 139.96 10709.24 8880.63 33220.46 00:10:03.288 PCIE (0000:00:13.0) NSID 1 from core 0: 11943.61 139.96 10693.64 7877.19 32832.85 00:10:03.288 PCIE (0000:00:12.0) NSID 1 from core 0: 11943.61 139.96 10677.48 7163.78 31808.12 00:10:03.288 PCIE (0000:00:12.0) NSID 2 from core 0: 11943.61 139.96 10661.33 6362.30 30518.20 00:10:03.288 PCIE (0000:00:12.0) NSID 3 from core 0: 11943.61 139.96 10645.30 5475.82 29326.95 00:10:03.288 ======================================================== 00:10:03.288 Total : 71661.67 839.79 10684.85 5475.82 34033.50 00:10:03.288 00:10:03.288 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:03.288 ================================================================================= 00:10:03.288 1.00000% : 9353.775us 00:10:03.288 10.00000% : 9651.665us 00:10:03.288 25.00000% : 10009.135us 00:10:03.288 50.00000% : 10426.182us 00:10:03.288 75.00000% : 10902.807us 00:10:03.288 90.00000% : 11558.167us 00:10:03.288 95.00000% : 12332.684us 00:10:03.288 98.00000% : 14179.607us 00:10:03.288 99.00000% : 23712.116us 00:10:03.288 99.50000% : 31933.905us 00:10:03.288 99.90000% : 33840.407us 00:10:03.288 99.99000% : 34078.720us 00:10:03.288 99.99900% : 34078.720us 00:10:03.288 99.99990% : 34078.720us 00:10:03.288 99.99999% : 34078.720us 00:10:03.288 00:10:03.288 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:03.288 ================================================================================= 00:10:03.288 1.00000% : 9413.353us 00:10:03.288 10.00000% : 9770.822us 00:10:03.288 25.00000% : 10009.135us 00:10:03.288 50.00000% : 10426.182us 00:10:03.288 75.00000% : 10843.229us 00:10:03.288 90.00000% : 11498.589us 00:10:03.288 95.00000% : 12332.684us 00:10:03.288 98.00000% : 14239.185us 00:10:03.288 99.00000% : 23473.804us 00:10:03.288 99.50000% : 31457.280us 00:10:03.288 99.90000% : 32887.156us 00:10:03.288 99.99000% : 33363.782us 00:10:03.288 99.99900% : 33363.782us 00:10:03.288 99.99990% : 33363.782us 00:10:03.288 99.99999% : 33363.782us 00:10:03.288 00:10:03.288 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:03.288 ================================================================================= 00:10:03.288 1.00000% : 9353.775us 00:10:03.288 10.00000% : 9711.244us 00:10:03.288 25.00000% : 10009.135us 00:10:03.288 50.00000% : 10426.182us 00:10:03.288 75.00000% : 10902.807us 00:10:03.288 90.00000% : 11439.011us 00:10:03.288 95.00000% : 12213.527us 00:10:03.288 98.00000% : 13822.138us 00:10:03.288 99.00000% : 22878.022us 00:10:03.288 99.50000% : 30980.655us 00:10:03.288 99.90000% : 32648.844us 00:10:03.288 99.99000% : 32887.156us 00:10:03.288 99.99900% : 32887.156us 00:10:03.288 99.99990% : 32887.156us 00:10:03.288 99.99999% : 32887.156us 00:10:03.288 00:10:03.288 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:03.288 ================================================================================= 00:10:03.288 1.00000% : 9413.353us 00:10:03.288 10.00000% : 9770.822us 00:10:03.288 25.00000% : 10009.135us 00:10:03.288 50.00000% : 10426.182us 00:10:03.288 75.00000% : 10843.229us 00:10:03.288 90.00000% : 11439.011us 00:10:03.288 95.00000% : 12153.949us 00:10:03.288 98.00000% : 13822.138us 
00:10:03.288 99.00000% : 22043.927us 00:10:03.288 99.50000% : 30265.716us 00:10:03.288 99.90000% : 31695.593us 00:10:03.288 99.99000% : 31933.905us 00:10:03.288 99.99900% : 31933.905us 00:10:03.288 99.99990% : 31933.905us 00:10:03.288 99.99999% : 31933.905us 00:10:03.288 00:10:03.288 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:03.288 ================================================================================= 00:10:03.288 1.00000% : 9353.775us 00:10:03.288 10.00000% : 9770.822us 00:10:03.288 25.00000% : 10009.135us 00:10:03.288 50.00000% : 10426.182us 00:10:03.289 75.00000% : 10902.807us 00:10:03.289 90.00000% : 11439.011us 00:10:03.289 95.00000% : 12094.371us 00:10:03.289 98.00000% : 14000.873us 00:10:03.289 99.00000% : 21209.833us 00:10:03.289 99.50000% : 29074.153us 00:10:03.289 99.90000% : 30265.716us 00:10:03.289 99.99000% : 30504.029us 00:10:03.289 99.99900% : 30742.342us 00:10:03.289 99.99990% : 30742.342us 00:10:03.289 99.99999% : 30742.342us 00:10:03.289 00:10:03.289 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:03.289 ================================================================================= 00:10:03.289 1.00000% : 9353.775us 00:10:03.289 10.00000% : 9770.822us 00:10:03.289 25.00000% : 10009.135us 00:10:03.289 50.00000% : 10426.182us 00:10:03.289 75.00000% : 10843.229us 00:10:03.289 90.00000% : 11439.011us 00:10:03.289 95.00000% : 12094.371us 00:10:03.289 98.00000% : 14120.029us 00:10:03.289 99.00000% : 20494.895us 00:10:03.289 99.50000% : 28001.745us 00:10:03.289 99.90000% : 29074.153us 00:10:03.289 99.99000% : 29312.465us 00:10:03.289 99.99900% : 29431.622us 00:10:03.289 99.99990% : 29431.622us 00:10:03.289 99.99999% : 29431.622us 00:10:03.289 00:10:03.289 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:03.289 ============================================================================== 00:10:03.289 Range in us Cumulative IO count 00:10:03.289 8936.727 - 8996.305: 0.0084% ( 1) 00:10:03.289 8996.305 - 9055.884: 0.0418% ( 4) 00:10:03.289 9055.884 - 9115.462: 0.1671% ( 15) 00:10:03.289 9115.462 - 9175.040: 0.3008% ( 16) 00:10:03.289 9175.040 - 9234.618: 0.5264% ( 27) 00:10:03.289 9234.618 - 9294.196: 0.9860% ( 55) 00:10:03.289 9294.196 - 9353.775: 1.6461% ( 79) 00:10:03.289 9353.775 - 9413.353: 2.5986% ( 114) 00:10:03.289 9413.353 - 9472.931: 4.0441% ( 173) 00:10:03.289 9472.931 - 9532.509: 5.9325% ( 226) 00:10:03.289 9532.509 - 9592.087: 7.9545% ( 242) 00:10:03.289 9592.087 - 9651.665: 10.1521% ( 263) 00:10:03.289 9651.665 - 9711.244: 12.8844% ( 327) 00:10:03.289 9711.244 - 9770.822: 15.6334% ( 329) 00:10:03.289 9770.822 - 9830.400: 18.4074% ( 332) 00:10:03.289 9830.400 - 9889.978: 21.3402% ( 351) 00:10:03.289 9889.978 - 9949.556: 24.3650% ( 362) 00:10:03.289 9949.556 - 10009.135: 27.3897% ( 362) 00:10:03.289 10009.135 - 10068.713: 30.5147% ( 374) 00:10:03.289 10068.713 - 10128.291: 33.5227% ( 360) 00:10:03.289 10128.291 - 10187.869: 36.7146% ( 382) 00:10:03.289 10187.869 - 10247.447: 40.1070% ( 406) 00:10:03.289 10247.447 - 10307.025: 43.4743% ( 403) 00:10:03.289 10307.025 - 10366.604: 47.1925% ( 445) 00:10:03.289 10366.604 - 10426.182: 50.9108% ( 445) 00:10:03.289 10426.182 - 10485.760: 54.5455% ( 435) 00:10:03.289 10485.760 - 10545.338: 58.0799% ( 423) 00:10:03.289 10545.338 - 10604.916: 61.6143% ( 423) 00:10:03.289 10604.916 - 10664.495: 64.7142% ( 371) 00:10:03.289 10664.495 - 10724.073: 67.7975% ( 369) 00:10:03.289 10724.073 - 10783.651: 70.7136% ( 349) 00:10:03.289 10783.651 - 10843.229: 
73.3790% ( 319) 00:10:03.289 10843.229 - 10902.807: 75.7771% ( 287) 00:10:03.289 10902.807 - 10962.385: 77.9328% ( 258) 00:10:03.289 10962.385 - 11021.964: 79.7209% ( 214) 00:10:03.289 11021.964 - 11081.542: 81.3670% ( 197) 00:10:03.289 11081.542 - 11141.120: 82.7874% ( 170) 00:10:03.289 11141.120 - 11200.698: 84.1995% ( 169) 00:10:03.289 11200.698 - 11260.276: 85.3693% ( 140) 00:10:03.289 11260.276 - 11319.855: 86.6143% ( 149) 00:10:03.289 11319.855 - 11379.433: 87.7757% ( 139) 00:10:03.289 11379.433 - 11439.011: 88.7450% ( 116) 00:10:03.289 11439.011 - 11498.589: 89.7309% ( 118) 00:10:03.289 11498.589 - 11558.167: 90.5414% ( 97) 00:10:03.289 11558.167 - 11617.745: 91.2600% ( 86) 00:10:03.289 11617.745 - 11677.324: 91.9034% ( 77) 00:10:03.289 11677.324 - 11736.902: 92.3630% ( 55) 00:10:03.289 11736.902 - 11796.480: 92.8559% ( 59) 00:10:03.289 11796.480 - 11856.058: 93.1651% ( 37) 00:10:03.289 11856.058 - 11915.636: 93.4659% ( 36) 00:10:03.289 11915.636 - 11975.215: 93.7584% ( 35) 00:10:03.289 11975.215 - 12034.793: 94.0090% ( 30) 00:10:03.289 12034.793 - 12094.371: 94.2346% ( 27) 00:10:03.289 12094.371 - 12153.949: 94.4853% ( 30) 00:10:03.289 12153.949 - 12213.527: 94.7109% ( 27) 00:10:03.289 12213.527 - 12273.105: 94.9616% ( 30) 00:10:03.289 12273.105 - 12332.684: 95.1370% ( 21) 00:10:03.289 12332.684 - 12392.262: 95.3543% ( 26) 00:10:03.289 12392.262 - 12451.840: 95.5130% ( 19) 00:10:03.289 12451.840 - 12511.418: 95.6885% ( 21) 00:10:03.289 12511.418 - 12570.996: 95.9057% ( 26) 00:10:03.289 12570.996 - 12630.575: 96.0561% ( 18) 00:10:03.289 12630.575 - 12690.153: 96.2400% ( 22) 00:10:03.289 12690.153 - 12749.731: 96.3820% ( 17) 00:10:03.289 12749.731 - 12809.309: 96.5826% ( 24) 00:10:03.289 12809.309 - 12868.887: 96.7330% ( 18) 00:10:03.289 12868.887 - 12928.465: 96.8750% ( 17) 00:10:03.289 12928.465 - 12988.044: 96.9669% ( 11) 00:10:03.289 12988.044 - 13047.622: 97.0588% ( 11) 00:10:03.289 13047.622 - 13107.200: 97.1090% ( 6) 00:10:03.289 13107.200 - 13166.778: 97.1591% ( 6) 00:10:03.289 13166.778 - 13226.356: 97.1842% ( 3) 00:10:03.289 13226.356 - 13285.935: 97.2259% ( 5) 00:10:03.289 13285.935 - 13345.513: 97.2510% ( 3) 00:10:03.289 13345.513 - 13405.091: 97.2928% ( 5) 00:10:03.289 13405.091 - 13464.669: 97.3346% ( 5) 00:10:03.289 13464.669 - 13524.247: 97.3763% ( 5) 00:10:03.289 13524.247 - 13583.825: 97.4014% ( 3) 00:10:03.289 13583.825 - 13643.404: 97.4515% ( 6) 00:10:03.289 13643.404 - 13702.982: 97.4850% ( 4) 00:10:03.289 13702.982 - 13762.560: 97.5351% ( 6) 00:10:03.289 13762.560 - 13822.138: 97.6019% ( 8) 00:10:03.289 13822.138 - 13881.716: 97.6688% ( 8) 00:10:03.289 13881.716 - 13941.295: 97.7189% ( 6) 00:10:03.289 13941.295 - 14000.873: 97.7858% ( 8) 00:10:03.289 14000.873 - 14060.451: 97.8693% ( 10) 00:10:03.289 14060.451 - 14120.029: 97.9362% ( 8) 00:10:03.289 14120.029 - 14179.607: 98.0197% ( 10) 00:10:03.289 14179.607 - 14239.185: 98.0699% ( 6) 00:10:03.289 14239.185 - 14298.764: 98.1367% ( 8) 00:10:03.289 14298.764 - 14358.342: 98.1785% ( 5) 00:10:03.289 14358.342 - 14417.920: 98.2286% ( 6) 00:10:03.289 14417.920 - 14477.498: 98.2787% ( 6) 00:10:03.289 14477.498 - 14537.076: 98.3289% ( 6) 00:10:03.289 14537.076 - 14596.655: 98.3539% ( 3) 00:10:03.289 14596.655 - 14656.233: 98.3874% ( 4) 00:10:03.289 14656.233 - 14715.811: 98.4291% ( 5) 00:10:03.289 14715.811 - 14775.389: 98.4793% ( 6) 00:10:03.289 14775.389 - 14834.967: 98.5043% ( 3) 00:10:03.289 14834.967 - 14894.545: 98.5712% ( 8) 00:10:03.289 14894.545 - 14954.124: 98.5963% ( 3) 00:10:03.289 14954.124 - 15013.702: 
98.6213% ( 3) 00:10:03.289 15013.702 - 15073.280: 98.6631% ( 5) 00:10:03.289 15073.280 - 15132.858: 98.7132% ( 6) 00:10:03.289 15132.858 - 15192.436: 98.7634% ( 6) 00:10:03.289 15192.436 - 15252.015: 98.7717% ( 1) 00:10:03.289 15252.015 - 15371.171: 98.8135% ( 5) 00:10:03.289 15371.171 - 15490.327: 98.8636% ( 6) 00:10:03.289 15490.327 - 15609.484: 98.8887% ( 3) 00:10:03.289 15609.484 - 15728.640: 98.9305% ( 5) 00:10:03.289 23235.491 - 23354.647: 98.9388% ( 1) 00:10:03.289 23354.647 - 23473.804: 98.9639% ( 3) 00:10:03.289 23473.804 - 23592.960: 98.9806% ( 2) 00:10:03.289 23592.960 - 23712.116: 99.0140% ( 4) 00:10:03.289 23712.116 - 23831.273: 99.0391% ( 3) 00:10:03.289 23831.273 - 23950.429: 99.0725% ( 4) 00:10:03.289 23950.429 - 24069.585: 99.0976% ( 3) 00:10:03.289 24069.585 - 24188.742: 99.1310% ( 4) 00:10:03.289 24188.742 - 24307.898: 99.1644% ( 4) 00:10:03.289 24307.898 - 24427.055: 99.1979% ( 4) 00:10:03.289 24427.055 - 24546.211: 99.2146% ( 2) 00:10:03.289 24546.211 - 24665.367: 99.2396% ( 3) 00:10:03.289 24665.367 - 24784.524: 99.2731% ( 4) 00:10:03.289 24784.524 - 24903.680: 99.2981% ( 3) 00:10:03.289 24903.680 - 25022.836: 99.3316% ( 4) 00:10:03.289 25022.836 - 25141.993: 99.3650% ( 4) 00:10:03.289 25141.993 - 25261.149: 99.3900% ( 3) 00:10:03.289 25380.305 - 25499.462: 99.4318% ( 5) 00:10:03.289 25499.462 - 25618.618: 99.4652% ( 4) 00:10:03.289 31695.593 - 31933.905: 99.5070% ( 5) 00:10:03.289 31933.905 - 32172.218: 99.5655% ( 7) 00:10:03.289 32172.218 - 32410.531: 99.6240% ( 7) 00:10:03.289 32410.531 - 32648.844: 99.6741% ( 6) 00:10:03.289 32648.844 - 32887.156: 99.7243% ( 6) 00:10:03.289 32887.156 - 33125.469: 99.7911% ( 8) 00:10:03.289 33125.469 - 33363.782: 99.8412% ( 6) 00:10:03.289 33363.782 - 33602.095: 99.8914% ( 6) 00:10:03.289 33602.095 - 33840.407: 99.9582% ( 8) 00:10:03.289 33840.407 - 34078.720: 100.0000% ( 5) 00:10:03.289 00:10:03.289 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:03.289 ============================================================================== 00:10:03.289 Range in us Cumulative IO count 00:10:03.289 8877.149 - 8936.727: 0.0334% ( 4) 00:10:03.289 8936.727 - 8996.305: 0.0668% ( 4) 00:10:03.289 8996.305 - 9055.884: 0.1003% ( 4) 00:10:03.289 9055.884 - 9115.462: 0.1253% ( 3) 00:10:03.289 9115.462 - 9175.040: 0.2005% ( 9) 00:10:03.289 9175.040 - 9234.618: 0.3092% ( 13) 00:10:03.289 9234.618 - 9294.196: 0.4596% ( 18) 00:10:03.289 9294.196 - 9353.775: 0.7604% ( 36) 00:10:03.289 9353.775 - 9413.353: 1.3369% ( 69) 00:10:03.289 9413.353 - 9472.931: 2.2059% ( 104) 00:10:03.289 9472.931 - 9532.509: 3.3506% ( 137) 00:10:03.289 9532.509 - 9592.087: 5.0217% ( 200) 00:10:03.289 9592.087 - 9651.665: 7.1691% ( 257) 00:10:03.289 9651.665 - 9711.244: 9.7092% ( 304) 00:10:03.289 9711.244 - 9770.822: 12.6003% ( 346) 00:10:03.289 9770.822 - 9830.400: 15.7921% ( 382) 00:10:03.289 9830.400 - 9889.978: 19.0592% ( 391) 00:10:03.289 9889.978 - 9949.556: 22.6354% ( 428) 00:10:03.289 9949.556 - 10009.135: 26.1698% ( 423) 00:10:03.289 10009.135 - 10068.713: 29.8379% ( 439) 00:10:03.289 10068.713 - 10128.291: 33.5729% ( 447) 00:10:03.289 10128.291 - 10187.869: 37.3162% ( 448) 00:10:03.289 10187.869 - 10247.447: 41.1598% ( 460) 00:10:03.289 10247.447 - 10307.025: 44.9783% ( 457) 00:10:03.289 10307.025 - 10366.604: 48.7968% ( 457) 00:10:03.289 10366.604 - 10426.182: 52.6237% ( 458) 00:10:03.289 10426.182 - 10485.760: 56.5174% ( 466) 00:10:03.290 10485.760 - 10545.338: 60.1270% ( 432) 00:10:03.290 10545.338 - 10604.916: 63.6781% ( 425) 00:10:03.290 
10604.916 - 10664.495: 66.9703% ( 394) 00:10:03.290 10664.495 - 10724.073: 70.0451% ( 368) 00:10:03.290 10724.073 - 10783.651: 72.8443% ( 335) 00:10:03.290 10783.651 - 10843.229: 75.2841% ( 292) 00:10:03.290 10843.229 - 10902.807: 77.4649% ( 261) 00:10:03.290 10902.807 - 10962.385: 79.2447% ( 213) 00:10:03.290 10962.385 - 11021.964: 80.9576% ( 205) 00:10:03.290 11021.964 - 11081.542: 82.4950% ( 184) 00:10:03.290 11081.542 - 11141.120: 83.9906% ( 179) 00:10:03.290 11141.120 - 11200.698: 85.4612% ( 176) 00:10:03.290 11200.698 - 11260.276: 86.7313% ( 152) 00:10:03.290 11260.276 - 11319.855: 87.8593% ( 135) 00:10:03.290 11319.855 - 11379.433: 88.8453% ( 118) 00:10:03.290 11379.433 - 11439.011: 89.7727% ( 111) 00:10:03.290 11439.011 - 11498.589: 90.5665% ( 95) 00:10:03.290 11498.589 - 11558.167: 91.2182% ( 78) 00:10:03.290 11558.167 - 11617.745: 91.6862% ( 56) 00:10:03.290 11617.745 - 11677.324: 91.9953% ( 37) 00:10:03.290 11677.324 - 11736.902: 92.3045% ( 37) 00:10:03.290 11736.902 - 11796.480: 92.5802% ( 33) 00:10:03.290 11796.480 - 11856.058: 92.8142% ( 28) 00:10:03.290 11856.058 - 11915.636: 93.0565% ( 29) 00:10:03.290 11915.636 - 11975.215: 93.3072% ( 30) 00:10:03.290 11975.215 - 12034.793: 93.5745% ( 32) 00:10:03.290 12034.793 - 12094.371: 93.8670% ( 35) 00:10:03.290 12094.371 - 12153.949: 94.1678% ( 36) 00:10:03.290 12153.949 - 12213.527: 94.4602% ( 35) 00:10:03.290 12213.527 - 12273.105: 94.7443% ( 34) 00:10:03.290 12273.105 - 12332.684: 95.0451% ( 36) 00:10:03.290 12332.684 - 12392.262: 95.3209% ( 33) 00:10:03.290 12392.262 - 12451.840: 95.6217% ( 36) 00:10:03.290 12451.840 - 12511.418: 95.9475% ( 39) 00:10:03.290 12511.418 - 12570.996: 96.2149% ( 32) 00:10:03.290 12570.996 - 12630.575: 96.4823% ( 32) 00:10:03.290 12630.575 - 12690.153: 96.7497% ( 32) 00:10:03.290 12690.153 - 12749.731: 96.9335% ( 22) 00:10:03.290 12749.731 - 12809.309: 97.0338% ( 12) 00:10:03.290 12809.309 - 12868.887: 97.1340% ( 12) 00:10:03.290 12868.887 - 12928.465: 97.2426% ( 13) 00:10:03.290 12928.465 - 12988.044: 97.3095% ( 8) 00:10:03.290 12988.044 - 13047.622: 97.3847% ( 9) 00:10:03.290 13047.622 - 13107.200: 97.4599% ( 9) 00:10:03.290 13107.200 - 13166.778: 97.5267% ( 8) 00:10:03.290 13166.778 - 13226.356: 97.5936% ( 8) 00:10:03.290 13226.356 - 13285.935: 97.6354% ( 5) 00:10:03.290 13285.935 - 13345.513: 97.6688% ( 4) 00:10:03.290 13345.513 - 13405.091: 97.7273% ( 7) 00:10:03.290 13405.091 - 13464.669: 97.7607% ( 4) 00:10:03.290 13464.669 - 13524.247: 97.8108% ( 6) 00:10:03.290 13524.247 - 13583.825: 97.8443% ( 4) 00:10:03.290 13583.825 - 13643.404: 97.8610% ( 2) 00:10:03.290 13881.716 - 13941.295: 97.8860% ( 3) 00:10:03.290 13941.295 - 14000.873: 97.9111% ( 3) 00:10:03.290 14000.873 - 14060.451: 97.9362% ( 3) 00:10:03.290 14060.451 - 14120.029: 97.9612% ( 3) 00:10:03.290 14120.029 - 14179.607: 97.9947% ( 4) 00:10:03.290 14179.607 - 14239.185: 98.0197% ( 3) 00:10:03.290 14239.185 - 14298.764: 98.0448% ( 3) 00:10:03.290 14298.764 - 14358.342: 98.0699% ( 3) 00:10:03.290 14358.342 - 14417.920: 98.0949% ( 3) 00:10:03.290 14417.920 - 14477.498: 98.1200% ( 3) 00:10:03.290 14477.498 - 14537.076: 98.1785% ( 7) 00:10:03.290 14537.076 - 14596.655: 98.2203% ( 5) 00:10:03.290 14596.655 - 14656.233: 98.2704% ( 6) 00:10:03.290 14656.233 - 14715.811: 98.3122% ( 5) 00:10:03.290 14715.811 - 14775.389: 98.3707% ( 7) 00:10:03.290 14775.389 - 14834.967: 98.4291% ( 7) 00:10:03.290 14834.967 - 14894.545: 98.4876% ( 7) 00:10:03.290 14894.545 - 14954.124: 98.5378% ( 6) 00:10:03.290 14954.124 - 15013.702: 98.5879% ( 6) 00:10:03.290 
15013.702 - 15073.280: 98.6464% ( 7) 00:10:03.290 15073.280 - 15132.858: 98.6798% ( 4) 00:10:03.290 15132.858 - 15192.436: 98.7049% ( 3) 00:10:03.290 15192.436 - 15252.015: 98.7383% ( 4) 00:10:03.290 15252.015 - 15371.171: 98.7884% ( 6) 00:10:03.290 15371.171 - 15490.327: 98.8469% ( 7) 00:10:03.290 15490.327 - 15609.484: 98.9054% ( 7) 00:10:03.290 15609.484 - 15728.640: 98.9305% ( 3) 00:10:03.290 23116.335 - 23235.491: 98.9639% ( 4) 00:10:03.290 23235.491 - 23354.647: 98.9973% ( 4) 00:10:03.290 23354.647 - 23473.804: 99.0307% ( 4) 00:10:03.290 23473.804 - 23592.960: 99.0642% ( 4) 00:10:03.290 23592.960 - 23712.116: 99.0892% ( 3) 00:10:03.290 23712.116 - 23831.273: 99.1227% ( 4) 00:10:03.290 23831.273 - 23950.429: 99.1561% ( 4) 00:10:03.290 23950.429 - 24069.585: 99.1811% ( 3) 00:10:03.290 24069.585 - 24188.742: 99.2146% ( 4) 00:10:03.290 24188.742 - 24307.898: 99.2480% ( 4) 00:10:03.290 24307.898 - 24427.055: 99.2731% ( 3) 00:10:03.290 24427.055 - 24546.211: 99.3065% ( 4) 00:10:03.290 24546.211 - 24665.367: 99.3399% ( 4) 00:10:03.290 24665.367 - 24784.524: 99.3733% ( 4) 00:10:03.290 24784.524 - 24903.680: 99.4068% ( 4) 00:10:03.290 24903.680 - 25022.836: 99.4402% ( 4) 00:10:03.290 25022.836 - 25141.993: 99.4652% ( 3) 00:10:03.290 31218.967 - 31457.280: 99.5237% ( 7) 00:10:03.290 31457.280 - 31695.593: 99.5822% ( 7) 00:10:03.290 31695.593 - 31933.905: 99.6491% ( 8) 00:10:03.290 31933.905 - 32172.218: 99.7159% ( 8) 00:10:03.290 32172.218 - 32410.531: 99.7744% ( 7) 00:10:03.290 32410.531 - 32648.844: 99.8412% ( 8) 00:10:03.290 32648.844 - 32887.156: 99.9081% ( 8) 00:10:03.290 32887.156 - 33125.469: 99.9749% ( 8) 00:10:03.290 33125.469 - 33363.782: 100.0000% ( 3) 00:10:03.290 00:10:03.290 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:03.290 ============================================================================== 00:10:03.290 Range in us Cumulative IO count 00:10:03.290 7864.320 - 7923.898: 0.0251% ( 3) 00:10:03.290 7923.898 - 7983.476: 0.0585% ( 4) 00:10:03.290 7983.476 - 8043.055: 0.0919% ( 4) 00:10:03.290 8043.055 - 8102.633: 0.1253% ( 4) 00:10:03.290 8102.633 - 8162.211: 0.1588% ( 4) 00:10:03.290 8162.211 - 8221.789: 0.1922% ( 4) 00:10:03.290 8221.789 - 8281.367: 0.2256% ( 4) 00:10:03.290 8281.367 - 8340.945: 0.2590% ( 4) 00:10:03.290 8340.945 - 8400.524: 0.2924% ( 4) 00:10:03.290 8400.524 - 8460.102: 0.3259% ( 4) 00:10:03.290 8460.102 - 8519.680: 0.3593% ( 4) 00:10:03.290 8519.680 - 8579.258: 0.3927% ( 4) 00:10:03.290 8579.258 - 8638.836: 0.4261% ( 4) 00:10:03.290 8638.836 - 8698.415: 0.4596% ( 4) 00:10:03.290 8698.415 - 8757.993: 0.4846% ( 3) 00:10:03.290 8757.993 - 8817.571: 0.5097% ( 3) 00:10:03.290 8817.571 - 8877.149: 0.5348% ( 3) 00:10:03.290 9055.884 - 9115.462: 0.5431% ( 1) 00:10:03.290 9115.462 - 9175.040: 0.5682% ( 3) 00:10:03.290 9175.040 - 9234.618: 0.6016% ( 4) 00:10:03.290 9234.618 - 9294.196: 0.7520% ( 18) 00:10:03.290 9294.196 - 9353.775: 1.0027% ( 30) 00:10:03.290 9353.775 - 9413.353: 1.5458% ( 65) 00:10:03.290 9413.353 - 9472.931: 2.4900% ( 113) 00:10:03.290 9472.931 - 9532.509: 3.8102% ( 158) 00:10:03.290 9532.509 - 9592.087: 5.5147% ( 204) 00:10:03.290 9592.087 - 9651.665: 7.5618% ( 245) 00:10:03.290 9651.665 - 9711.244: 10.0685% ( 300) 00:10:03.290 9711.244 - 9770.822: 12.9011% ( 339) 00:10:03.290 9770.822 - 9830.400: 15.9592% ( 366) 00:10:03.290 9830.400 - 9889.978: 19.2848% ( 398) 00:10:03.290 9889.978 - 9949.556: 22.7356% ( 413) 00:10:03.290 9949.556 - 10009.135: 26.2533% ( 421) 00:10:03.290 10009.135 - 10068.713: 29.6791% ( 410) 
00:10:03.290 10068.713 - 10128.291: 33.3222% ( 436) 00:10:03.290 10128.291 - 10187.869: 36.9485% ( 434) 00:10:03.290 10187.869 - 10247.447: 40.7169% ( 451) 00:10:03.290 10247.447 - 10307.025: 44.4853% ( 451) 00:10:03.290 10307.025 - 10366.604: 48.2453% ( 450) 00:10:03.290 10366.604 - 10426.182: 52.2226% ( 476) 00:10:03.290 10426.182 - 10485.760: 55.8489% ( 434) 00:10:03.290 10485.760 - 10545.338: 59.2998% ( 413) 00:10:03.290 10545.338 - 10604.916: 62.7841% ( 417) 00:10:03.290 10604.916 - 10664.495: 66.1263% ( 400) 00:10:03.290 10664.495 - 10724.073: 69.2931% ( 379) 00:10:03.290 10724.073 - 10783.651: 72.0839% ( 334) 00:10:03.290 10783.651 - 10843.229: 74.6156% ( 303) 00:10:03.290 10843.229 - 10902.807: 76.8466% ( 267) 00:10:03.290 10902.807 - 10962.385: 78.8770% ( 243) 00:10:03.290 10962.385 - 11021.964: 80.7821% ( 228) 00:10:03.290 11021.964 - 11081.542: 82.5033% ( 206) 00:10:03.290 11081.542 - 11141.120: 84.0742% ( 188) 00:10:03.290 11141.120 - 11200.698: 85.5699% ( 179) 00:10:03.290 11200.698 - 11260.276: 87.0404% ( 176) 00:10:03.290 11260.276 - 11319.855: 88.3690% ( 159) 00:10:03.290 11319.855 - 11379.433: 89.6056% ( 148) 00:10:03.290 11379.433 - 11439.011: 90.5414% ( 112) 00:10:03.290 11439.011 - 11498.589: 91.3519% ( 97) 00:10:03.290 11498.589 - 11558.167: 92.0287% ( 81) 00:10:03.290 11558.167 - 11617.745: 92.5969% ( 68) 00:10:03.290 11617.745 - 11677.324: 93.0899% ( 59) 00:10:03.290 11677.324 - 11736.902: 93.4576% ( 44) 00:10:03.290 11736.902 - 11796.480: 93.7249% ( 32) 00:10:03.290 11796.480 - 11856.058: 93.9923% ( 32) 00:10:03.290 11856.058 - 11915.636: 94.2346% ( 29) 00:10:03.290 11915.636 - 11975.215: 94.4101% ( 21) 00:10:03.290 11975.215 - 12034.793: 94.5856% ( 21) 00:10:03.290 12034.793 - 12094.371: 94.7360% ( 18) 00:10:03.290 12094.371 - 12153.949: 94.8947% ( 19) 00:10:03.290 12153.949 - 12213.527: 95.0869% ( 23) 00:10:03.290 12213.527 - 12273.105: 95.2791% ( 23) 00:10:03.290 12273.105 - 12332.684: 95.4629% ( 22) 00:10:03.290 12332.684 - 12392.262: 95.6634% ( 24) 00:10:03.290 12392.262 - 12451.840: 95.8138% ( 18) 00:10:03.290 12451.840 - 12511.418: 95.9893% ( 21) 00:10:03.290 12511.418 - 12570.996: 96.1230% ( 16) 00:10:03.290 12570.996 - 12630.575: 96.2650% ( 17) 00:10:03.290 12630.575 - 12690.153: 96.3653% ( 12) 00:10:03.290 12690.153 - 12749.731: 96.4990% ( 16) 00:10:03.290 12749.731 - 12809.309: 96.6076% ( 13) 00:10:03.290 12809.309 - 12868.887: 96.7497% ( 17) 00:10:03.290 12868.887 - 12928.465: 96.8750% ( 15) 00:10:03.290 12928.465 - 12988.044: 97.0003% ( 15) 00:10:03.290 12988.044 - 13047.622: 97.0839% ( 10) 00:10:03.290 13047.622 - 13107.200: 97.1842% ( 12) 00:10:03.290 13107.200 - 13166.778: 97.2928% ( 13) 00:10:03.290 13166.778 - 13226.356: 97.3930% ( 12) 00:10:03.290 13226.356 - 13285.935: 97.4933% ( 12) 00:10:03.290 13285.935 - 13345.513: 97.5602% ( 8) 00:10:03.290 13345.513 - 13405.091: 97.6186% ( 7) 00:10:03.291 13405.091 - 13464.669: 97.6771% ( 7) 00:10:03.291 13464.669 - 13524.247: 97.7440% ( 8) 00:10:03.291 13524.247 - 13583.825: 97.7858% ( 5) 00:10:03.291 13583.825 - 13643.404: 97.8443% ( 7) 00:10:03.291 13643.404 - 13702.982: 97.8944% ( 6) 00:10:03.291 13702.982 - 13762.560: 97.9529% ( 7) 00:10:03.291 13762.560 - 13822.138: 98.0114% ( 7) 00:10:03.291 13822.138 - 13881.716: 98.0615% ( 6) 00:10:03.291 13881.716 - 13941.295: 98.0866% ( 3) 00:10:03.291 13941.295 - 14000.873: 98.1200% ( 4) 00:10:03.291 14000.873 - 14060.451: 98.1451% ( 3) 00:10:03.291 14060.451 - 14120.029: 98.1701% ( 3) 00:10:03.291 14120.029 - 14179.607: 98.1952% ( 3) 00:10:03.291 14179.607 - 
14239.185: 98.2286% ( 4) 00:10:03.291 14239.185 - 14298.764: 98.2537% ( 3) 00:10:03.291 14298.764 - 14358.342: 98.2787% ( 3) 00:10:03.291 14358.342 - 14417.920: 98.3122% ( 4) 00:10:03.291 14417.920 - 14477.498: 98.3372% ( 3) 00:10:03.291 14477.498 - 14537.076: 98.3623% ( 3) 00:10:03.291 14537.076 - 14596.655: 98.3874% ( 3) 00:10:03.291 14596.655 - 14656.233: 98.4124% ( 3) 00:10:03.291 14656.233 - 14715.811: 98.4459% ( 4) 00:10:03.291 14715.811 - 14775.389: 98.4793% ( 4) 00:10:03.291 14775.389 - 14834.967: 98.5043% ( 3) 00:10:03.291 14834.967 - 14894.545: 98.5211% ( 2) 00:10:03.291 14894.545 - 14954.124: 98.5545% ( 4) 00:10:03.291 14954.124 - 15013.702: 98.5795% ( 3) 00:10:03.291 15013.702 - 15073.280: 98.6046% ( 3) 00:10:03.291 15073.280 - 15132.858: 98.6297% ( 3) 00:10:03.291 15132.858 - 15192.436: 98.6547% ( 3) 00:10:03.291 15192.436 - 15252.015: 98.6882% ( 4) 00:10:03.291 15252.015 - 15371.171: 98.7383% ( 6) 00:10:03.291 15371.171 - 15490.327: 98.7968% ( 7) 00:10:03.291 15490.327 - 15609.484: 98.8553% ( 7) 00:10:03.291 15609.484 - 15728.640: 98.9054% ( 6) 00:10:03.291 15728.640 - 15847.796: 98.9305% ( 3) 00:10:03.291 22401.396 - 22520.553: 98.9388% ( 1) 00:10:03.291 22520.553 - 22639.709: 98.9639% ( 3) 00:10:03.291 22639.709 - 22758.865: 98.9973% ( 4) 00:10:03.291 22758.865 - 22878.022: 99.0391% ( 5) 00:10:03.291 22878.022 - 22997.178: 99.0642% ( 3) 00:10:03.291 22997.178 - 23116.335: 99.0976% ( 4) 00:10:03.291 23116.335 - 23235.491: 99.1310% ( 4) 00:10:03.291 23235.491 - 23354.647: 99.1561% ( 3) 00:10:03.291 23354.647 - 23473.804: 99.1895% ( 4) 00:10:03.291 23473.804 - 23592.960: 99.2146% ( 3) 00:10:03.291 23592.960 - 23712.116: 99.2396% ( 3) 00:10:03.291 23712.116 - 23831.273: 99.2731% ( 4) 00:10:03.291 23831.273 - 23950.429: 99.2981% ( 3) 00:10:03.291 23950.429 - 24069.585: 99.3316% ( 4) 00:10:03.291 24069.585 - 24188.742: 99.3650% ( 4) 00:10:03.291 24188.742 - 24307.898: 99.3984% ( 4) 00:10:03.291 24307.898 - 24427.055: 99.4318% ( 4) 00:10:03.291 24427.055 - 24546.211: 99.4652% ( 4) 00:10:03.291 30742.342 - 30980.655: 99.5237% ( 7) 00:10:03.291 30980.655 - 31218.967: 99.5822% ( 7) 00:10:03.291 31218.967 - 31457.280: 99.6491% ( 8) 00:10:03.291 31457.280 - 31695.593: 99.6992% ( 6) 00:10:03.291 31695.593 - 31933.905: 99.7577% ( 7) 00:10:03.291 31933.905 - 32172.218: 99.8162% ( 7) 00:10:03.291 32172.218 - 32410.531: 99.8830% ( 8) 00:10:03.291 32410.531 - 32648.844: 99.9499% ( 8) 00:10:03.291 32648.844 - 32887.156: 100.0000% ( 6) 00:10:03.291 00:10:03.291 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:03.291 ============================================================================== 00:10:03.291 Range in us Cumulative IO count 00:10:03.291 7149.382 - 7179.171: 0.0084% ( 1) 00:10:03.291 7179.171 - 7208.960: 0.0251% ( 2) 00:10:03.291 7208.960 - 7238.749: 0.0418% ( 2) 00:10:03.291 7238.749 - 7268.538: 0.0668% ( 3) 00:10:03.291 7268.538 - 7298.327: 0.0836% ( 2) 00:10:03.291 7298.327 - 7328.116: 0.1003% ( 2) 00:10:03.291 7328.116 - 7357.905: 0.1170% ( 2) 00:10:03.291 7357.905 - 7387.695: 0.1337% ( 2) 00:10:03.291 7387.695 - 7417.484: 0.1504% ( 2) 00:10:03.291 7417.484 - 7447.273: 0.1671% ( 2) 00:10:03.291 7447.273 - 7477.062: 0.1838% ( 2) 00:10:03.291 7477.062 - 7506.851: 0.1922% ( 1) 00:10:03.291 7506.851 - 7536.640: 0.2089% ( 2) 00:10:03.291 7566.429 - 7596.218: 0.2256% ( 2) 00:10:03.291 7596.218 - 7626.007: 0.2340% ( 1) 00:10:03.291 7626.007 - 7685.585: 0.2674% ( 4) 00:10:03.291 7685.585 - 7745.164: 0.3008% ( 4) 00:10:03.291 7745.164 - 7804.742: 0.3342% ( 4) 
00:10:03.291 7804.742 - 7864.320: 0.3676% ( 4) 00:10:03.291 7864.320 - 7923.898: 0.4011% ( 4) 00:10:03.291 7923.898 - 7983.476: 0.4345% ( 4) 00:10:03.291 7983.476 - 8043.055: 0.4679% ( 4) 00:10:03.291 8043.055 - 8102.633: 0.5013% ( 4) 00:10:03.291 8102.633 - 8162.211: 0.5348% ( 4) 00:10:03.291 9055.884 - 9115.462: 0.5515% ( 2) 00:10:03.291 9115.462 - 9175.040: 0.5765% ( 3) 00:10:03.291 9175.040 - 9234.618: 0.6434% ( 8) 00:10:03.291 9234.618 - 9294.196: 0.7520% ( 13) 00:10:03.291 9294.196 - 9353.775: 0.9693% ( 26) 00:10:03.291 9353.775 - 9413.353: 1.4539% ( 58) 00:10:03.291 9413.353 - 9472.931: 2.2811% ( 99) 00:10:03.291 9472.931 - 9532.509: 3.4843% ( 144) 00:10:03.291 9532.509 - 9592.087: 4.9382% ( 174) 00:10:03.291 9592.087 - 9651.665: 6.8934% ( 234) 00:10:03.291 9651.665 - 9711.244: 9.2831% ( 286) 00:10:03.291 9711.244 - 9770.822: 12.2911% ( 360) 00:10:03.291 9770.822 - 9830.400: 15.5665% ( 392) 00:10:03.291 9830.400 - 9889.978: 18.9589% ( 406) 00:10:03.291 9889.978 - 9949.556: 22.5602% ( 431) 00:10:03.291 9949.556 - 10009.135: 26.1113% ( 425) 00:10:03.291 10009.135 - 10068.713: 29.7711% ( 438) 00:10:03.291 10068.713 - 10128.291: 33.4225% ( 437) 00:10:03.291 10128.291 - 10187.869: 37.1825% ( 450) 00:10:03.291 10187.869 - 10247.447: 40.8924% ( 444) 00:10:03.291 10247.447 - 10307.025: 44.7276% ( 459) 00:10:03.291 10307.025 - 10366.604: 48.5461% ( 457) 00:10:03.291 10366.604 - 10426.182: 52.3061% ( 450) 00:10:03.291 10426.182 - 10485.760: 56.1163% ( 456) 00:10:03.291 10485.760 - 10545.338: 59.7761% ( 438) 00:10:03.291 10545.338 - 10604.916: 63.3857% ( 432) 00:10:03.291 10604.916 - 10664.495: 66.8199% ( 411) 00:10:03.291 10664.495 - 10724.073: 69.9114% ( 370) 00:10:03.291 10724.073 - 10783.651: 72.7273% ( 337) 00:10:03.291 10783.651 - 10843.229: 75.1588% ( 291) 00:10:03.291 10843.229 - 10902.807: 77.4315% ( 272) 00:10:03.291 10902.807 - 10962.385: 79.3533% ( 230) 00:10:03.291 10962.385 - 11021.964: 81.1247% ( 212) 00:10:03.291 11021.964 - 11081.542: 82.7289% ( 192) 00:10:03.291 11081.542 - 11141.120: 84.2580% ( 183) 00:10:03.291 11141.120 - 11200.698: 85.7453% ( 178) 00:10:03.291 11200.698 - 11260.276: 87.1574% ( 169) 00:10:03.291 11260.276 - 11319.855: 88.4943% ( 160) 00:10:03.291 11319.855 - 11379.433: 89.7142% ( 146) 00:10:03.291 11379.433 - 11439.011: 90.6918% ( 117) 00:10:03.291 11439.011 - 11498.589: 91.5274% ( 100) 00:10:03.291 11498.589 - 11558.167: 92.1290% ( 72) 00:10:03.291 11558.167 - 11617.745: 92.6471% ( 62) 00:10:03.291 11617.745 - 11677.324: 93.0732% ( 51) 00:10:03.291 11677.324 - 11736.902: 93.4325% ( 43) 00:10:03.291 11736.902 - 11796.480: 93.7584% ( 39) 00:10:03.291 11796.480 - 11856.058: 94.0508% ( 35) 00:10:03.291 11856.058 - 11915.636: 94.2931% ( 29) 00:10:03.291 11915.636 - 11975.215: 94.5271% ( 28) 00:10:03.291 11975.215 - 12034.793: 94.7694% ( 29) 00:10:03.291 12034.793 - 12094.371: 94.9699% ( 24) 00:10:03.291 12094.371 - 12153.949: 95.1788% ( 25) 00:10:03.291 12153.949 - 12213.527: 95.3626% ( 22) 00:10:03.291 12213.527 - 12273.105: 95.5548% ( 23) 00:10:03.291 12273.105 - 12332.684: 95.7219% ( 20) 00:10:03.291 12332.684 - 12392.262: 95.8556% ( 16) 00:10:03.291 12392.262 - 12451.840: 96.0144% ( 19) 00:10:03.291 12451.840 - 12511.418: 96.1648% ( 18) 00:10:03.291 12511.418 - 12570.996: 96.3068% ( 17) 00:10:03.291 12570.996 - 12630.575: 96.4154% ( 13) 00:10:03.291 12630.575 - 12690.153: 96.4990% ( 10) 00:10:03.291 12690.153 - 12749.731: 96.5491% ( 6) 00:10:03.291 12749.731 - 12809.309: 96.6327% ( 10) 00:10:03.291 12809.309 - 12868.887: 96.7246% ( 11) 00:10:03.291 
12868.887 - 12928.465: 96.7998% ( 9) 00:10:03.291 12928.465 - 12988.044: 96.8834% ( 10) 00:10:03.291 12988.044 - 13047.622: 96.9669% ( 10) 00:10:03.291 13047.622 - 13107.200: 97.0421% ( 9) 00:10:03.291 13107.200 - 13166.778: 97.1507% ( 13) 00:10:03.291 13166.778 - 13226.356: 97.2343% ( 10) 00:10:03.291 13226.356 - 13285.935: 97.3346% ( 12) 00:10:03.291 13285.935 - 13345.513: 97.4014% ( 8) 00:10:03.291 13345.513 - 13405.091: 97.4766% ( 9) 00:10:03.291 13405.091 - 13464.669: 97.5685% ( 11) 00:10:03.291 13464.669 - 13524.247: 97.6354% ( 8) 00:10:03.291 13524.247 - 13583.825: 97.7273% ( 11) 00:10:03.291 13583.825 - 13643.404: 97.7858% ( 7) 00:10:03.291 13643.404 - 13702.982: 97.8693% ( 10) 00:10:03.291 13702.982 - 13762.560: 97.9529% ( 10) 00:10:03.291 13762.560 - 13822.138: 98.0364% ( 10) 00:10:03.291 13822.138 - 13881.716: 98.1116% ( 9) 00:10:03.291 13881.716 - 13941.295: 98.1701% ( 7) 00:10:03.291 13941.295 - 14000.873: 98.2286% ( 7) 00:10:03.291 14000.873 - 14060.451: 98.2787% ( 6) 00:10:03.291 14060.451 - 14120.029: 98.3205% ( 5) 00:10:03.291 14120.029 - 14179.607: 98.3456% ( 3) 00:10:03.291 14179.607 - 14239.185: 98.3707% ( 3) 00:10:03.291 14239.185 - 14298.764: 98.3957% ( 3) 00:10:03.291 14596.655 - 14656.233: 98.4208% ( 3) 00:10:03.291 14656.233 - 14715.811: 98.4542% ( 4) 00:10:03.291 14715.811 - 14775.389: 98.4876% ( 4) 00:10:03.291 14775.389 - 14834.967: 98.5127% ( 3) 00:10:03.291 14834.967 - 14894.545: 98.5378% ( 3) 00:10:03.291 14894.545 - 14954.124: 98.5712% ( 4) 00:10:03.291 14954.124 - 15013.702: 98.5963% ( 3) 00:10:03.291 15013.702 - 15073.280: 98.6213% ( 3) 00:10:03.291 15073.280 - 15132.858: 98.6547% ( 4) 00:10:03.291 15132.858 - 15192.436: 98.6798% ( 3) 00:10:03.291 15192.436 - 15252.015: 98.7049% ( 3) 00:10:03.291 15252.015 - 15371.171: 98.7634% ( 7) 00:10:03.291 15371.171 - 15490.327: 98.8219% ( 7) 00:10:03.291 15490.327 - 15609.484: 98.8803% ( 7) 00:10:03.291 15609.484 - 15728.640: 98.9221% ( 5) 00:10:03.291 15728.640 - 15847.796: 98.9305% ( 1) 00:10:03.291 21686.458 - 21805.615: 98.9388% ( 1) 00:10:03.291 21805.615 - 21924.771: 98.9723% ( 4) 00:10:03.292 21924.771 - 22043.927: 99.0057% ( 4) 00:10:03.292 22043.927 - 22163.084: 99.0475% ( 5) 00:10:03.292 22163.084 - 22282.240: 99.0725% ( 3) 00:10:03.292 22282.240 - 22401.396: 99.0976% ( 3) 00:10:03.292 22401.396 - 22520.553: 99.1394% ( 5) 00:10:03.292 22520.553 - 22639.709: 99.1728% ( 4) 00:10:03.292 22639.709 - 22758.865: 99.1979% ( 3) 00:10:03.292 22758.865 - 22878.022: 99.2313% ( 4) 00:10:03.292 22878.022 - 22997.178: 99.2647% ( 4) 00:10:03.292 22997.178 - 23116.335: 99.2898% ( 3) 00:10:03.292 23116.335 - 23235.491: 99.3232% ( 4) 00:10:03.292 23235.491 - 23354.647: 99.3650% ( 5) 00:10:03.292 23354.647 - 23473.804: 99.3900% ( 3) 00:10:03.292 23473.804 - 23592.960: 99.4235% ( 4) 00:10:03.292 23592.960 - 23712.116: 99.4485% ( 3) 00:10:03.292 23712.116 - 23831.273: 99.4652% ( 2) 00:10:03.292 29908.247 - 30027.404: 99.4736% ( 1) 00:10:03.292 30027.404 - 30146.560: 99.4987% ( 3) 00:10:03.292 30146.560 - 30265.716: 99.5237% ( 3) 00:10:03.292 30265.716 - 30384.873: 99.5572% ( 4) 00:10:03.292 30384.873 - 30504.029: 99.5906% ( 4) 00:10:03.292 30504.029 - 30742.342: 99.6491% ( 7) 00:10:03.292 30742.342 - 30980.655: 99.7076% ( 7) 00:10:03.292 30980.655 - 31218.967: 99.7911% ( 10) 00:10:03.292 31218.967 - 31457.280: 99.8747% ( 10) 00:10:03.292 31457.280 - 31695.593: 99.9582% ( 10) 00:10:03.292 31695.593 - 31933.905: 100.0000% ( 5) 00:10:03.292 00:10:03.292 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:03.292 
============================================================================== 00:10:03.292 Range in us Cumulative IO count 00:10:03.292 6345.076 - 6374.865: 0.0167% ( 2) 00:10:03.292 6374.865 - 6404.655: 0.0251% ( 1) 00:10:03.292 6404.655 - 6434.444: 0.0418% ( 2) 00:10:03.292 6434.444 - 6464.233: 0.0668% ( 3) 00:10:03.292 6464.233 - 6494.022: 0.0836% ( 2) 00:10:03.292 6494.022 - 6523.811: 0.1003% ( 2) 00:10:03.292 6523.811 - 6553.600: 0.1170% ( 2) 00:10:03.292 6553.600 - 6583.389: 0.1253% ( 1) 00:10:03.292 6583.389 - 6613.178: 0.1420% ( 2) 00:10:03.292 6613.178 - 6642.967: 0.1588% ( 2) 00:10:03.292 6642.967 - 6672.756: 0.1755% ( 2) 00:10:03.292 6672.756 - 6702.545: 0.1922% ( 2) 00:10:03.292 6702.545 - 6732.335: 0.2089% ( 2) 00:10:03.292 6732.335 - 6762.124: 0.2256% ( 2) 00:10:03.292 6762.124 - 6791.913: 0.2423% ( 2) 00:10:03.292 6791.913 - 6821.702: 0.2590% ( 2) 00:10:03.292 6821.702 - 6851.491: 0.2757% ( 2) 00:10:03.292 6851.491 - 6881.280: 0.2924% ( 2) 00:10:03.292 6881.280 - 6911.069: 0.3092% ( 2) 00:10:03.292 6911.069 - 6940.858: 0.3259% ( 2) 00:10:03.292 6940.858 - 6970.647: 0.3426% ( 2) 00:10:03.292 6970.647 - 7000.436: 0.3593% ( 2) 00:10:03.292 7000.436 - 7030.225: 0.3760% ( 2) 00:10:03.292 7030.225 - 7060.015: 0.3927% ( 2) 00:10:03.292 7060.015 - 7089.804: 0.4094% ( 2) 00:10:03.292 7089.804 - 7119.593: 0.4261% ( 2) 00:10:03.292 7119.593 - 7149.382: 0.4428% ( 2) 00:10:03.292 7149.382 - 7179.171: 0.4596% ( 2) 00:10:03.292 7179.171 - 7208.960: 0.4763% ( 2) 00:10:03.292 7208.960 - 7238.749: 0.4846% ( 1) 00:10:03.292 7238.749 - 7268.538: 0.5013% ( 2) 00:10:03.292 7268.538 - 7298.327: 0.5180% ( 2) 00:10:03.292 7298.327 - 7328.116: 0.5348% ( 2) 00:10:03.292 9055.884 - 9115.462: 0.5932% ( 7) 00:10:03.292 9115.462 - 9175.040: 0.6434% ( 6) 00:10:03.292 9175.040 - 9234.618: 0.6852% ( 5) 00:10:03.292 9234.618 - 9294.196: 0.7938% ( 13) 00:10:03.292 9294.196 - 9353.775: 1.0528% ( 31) 00:10:03.292 9353.775 - 9413.353: 1.5876% ( 64) 00:10:03.292 9413.353 - 9472.931: 2.4983% ( 109) 00:10:03.292 9472.931 - 9532.509: 3.6848% ( 142) 00:10:03.292 9532.509 - 9592.087: 5.2557% ( 188) 00:10:03.292 9592.087 - 9651.665: 7.2610% ( 240) 00:10:03.292 9651.665 - 9711.244: 9.5922% ( 279) 00:10:03.292 9711.244 - 9770.822: 12.5084% ( 349) 00:10:03.292 9770.822 - 9830.400: 15.7670% ( 390) 00:10:03.292 9830.400 - 9889.978: 18.9004% ( 375) 00:10:03.292 9889.978 - 9949.556: 22.3178% ( 409) 00:10:03.292 9949.556 - 10009.135: 25.8606% ( 424) 00:10:03.292 10009.135 - 10068.713: 29.4034% ( 424) 00:10:03.292 10068.713 - 10128.291: 33.1049% ( 443) 00:10:03.292 10128.291 - 10187.869: 36.8065% ( 443) 00:10:03.292 10187.869 - 10247.447: 40.5414% ( 447) 00:10:03.292 10247.447 - 10307.025: 44.3432% ( 455) 00:10:03.292 10307.025 - 10366.604: 48.1200% ( 452) 00:10:03.292 10366.604 - 10426.182: 51.8633% ( 448) 00:10:03.292 10426.182 - 10485.760: 55.6400% ( 452) 00:10:03.292 10485.760 - 10545.338: 59.2998% ( 438) 00:10:03.292 10545.338 - 10604.916: 62.9846% ( 441) 00:10:03.292 10604.916 - 10664.495: 66.3937% ( 408) 00:10:03.292 10664.495 - 10724.073: 69.5605% ( 379) 00:10:03.292 10724.073 - 10783.651: 72.4014% ( 340) 00:10:03.292 10783.651 - 10843.229: 74.9248% ( 302) 00:10:03.292 10843.229 - 10902.807: 77.2142% ( 274) 00:10:03.292 10902.807 - 10962.385: 79.0859% ( 224) 00:10:03.292 10962.385 - 11021.964: 80.9241% ( 220) 00:10:03.292 11021.964 - 11081.542: 82.6036% ( 201) 00:10:03.292 11081.542 - 11141.120: 84.0826% ( 177) 00:10:03.292 11141.120 - 11200.698: 85.6033% ( 182) 00:10:03.292 11200.698 - 11260.276: 87.0488% ( 173) 
00:10:03.292 11260.276 - 11319.855: 88.4442% ( 167) 00:10:03.292 11319.855 - 11379.433: 89.6223% ( 141) 00:10:03.292 11379.433 - 11439.011: 90.6166% ( 119) 00:10:03.292 11439.011 - 11498.589: 91.3937% ( 93) 00:10:03.292 11498.589 - 11558.167: 92.1039% ( 85) 00:10:03.292 11558.167 - 11617.745: 92.6805% ( 69) 00:10:03.292 11617.745 - 11677.324: 93.1568% ( 57) 00:10:03.292 11677.324 - 11736.902: 93.5411% ( 46) 00:10:03.292 11736.902 - 11796.480: 93.8168% ( 33) 00:10:03.292 11796.480 - 11856.058: 94.1176% ( 36) 00:10:03.292 11856.058 - 11915.636: 94.3600% ( 29) 00:10:03.292 11915.636 - 11975.215: 94.6273% ( 32) 00:10:03.292 11975.215 - 12034.793: 94.8780% ( 30) 00:10:03.292 12034.793 - 12094.371: 95.1287% ( 30) 00:10:03.292 12094.371 - 12153.949: 95.3626% ( 28) 00:10:03.292 12153.949 - 12213.527: 95.5966% ( 28) 00:10:03.292 12213.527 - 12273.105: 95.8055% ( 25) 00:10:03.292 12273.105 - 12332.684: 96.0227% ( 26) 00:10:03.292 12332.684 - 12392.262: 96.1731% ( 18) 00:10:03.292 12392.262 - 12451.840: 96.3068% ( 16) 00:10:03.292 12451.840 - 12511.418: 96.4238% ( 14) 00:10:03.292 12511.418 - 12570.996: 96.5324% ( 13) 00:10:03.292 12570.996 - 12630.575: 96.6160% ( 10) 00:10:03.292 12630.575 - 12690.153: 96.6828% ( 8) 00:10:03.292 12690.153 - 12749.731: 96.7580% ( 9) 00:10:03.292 12749.731 - 12809.309: 96.8165% ( 7) 00:10:03.292 12809.309 - 12868.887: 96.8834% ( 8) 00:10:03.292 12868.887 - 12928.465: 96.9418% ( 7) 00:10:03.292 12928.465 - 12988.044: 97.0003% ( 7) 00:10:03.292 12988.044 - 13047.622: 97.0421% ( 5) 00:10:03.292 13047.622 - 13107.200: 97.0922% ( 6) 00:10:03.292 13107.200 - 13166.778: 97.1257% ( 4) 00:10:03.292 13166.778 - 13226.356: 97.1507% ( 3) 00:10:03.292 13226.356 - 13285.935: 97.2009% ( 6) 00:10:03.292 13285.935 - 13345.513: 97.2594% ( 7) 00:10:03.292 13345.513 - 13405.091: 97.3095% ( 6) 00:10:03.292 13405.091 - 13464.669: 97.3763% ( 8) 00:10:03.292 13464.669 - 13524.247: 97.4515% ( 9) 00:10:03.292 13524.247 - 13583.825: 97.5267% ( 9) 00:10:03.292 13583.825 - 13643.404: 97.6019% ( 9) 00:10:03.292 13643.404 - 13702.982: 97.6437% ( 5) 00:10:03.292 13702.982 - 13762.560: 97.7356% ( 11) 00:10:03.292 13762.560 - 13822.138: 97.7941% ( 7) 00:10:03.292 13822.138 - 13881.716: 97.8860% ( 11) 00:10:03.292 13881.716 - 13941.295: 97.9696% ( 10) 00:10:03.292 13941.295 - 14000.873: 98.0699% ( 12) 00:10:03.292 14000.873 - 14060.451: 98.1367% ( 8) 00:10:03.292 14060.451 - 14120.029: 98.2119% ( 9) 00:10:03.292 14120.029 - 14179.607: 98.2787% ( 8) 00:10:03.292 14179.607 - 14239.185: 98.3623% ( 10) 00:10:03.292 14239.185 - 14298.764: 98.4375% ( 9) 00:10:03.292 14298.764 - 14358.342: 98.5294% ( 11) 00:10:03.292 14358.342 - 14417.920: 98.5879% ( 7) 00:10:03.292 14417.920 - 14477.498: 98.6380% ( 6) 00:10:03.292 14477.498 - 14537.076: 98.6882% ( 6) 00:10:03.292 14537.076 - 14596.655: 98.7383% ( 6) 00:10:03.292 14596.655 - 14656.233: 98.7717% ( 4) 00:10:03.292 14656.233 - 14715.811: 98.7968% ( 3) 00:10:03.292 14715.811 - 14775.389: 98.8219% ( 3) 00:10:03.292 14775.389 - 14834.967: 98.8469% ( 3) 00:10:03.292 14834.967 - 14894.545: 98.8720% ( 3) 00:10:03.292 14894.545 - 14954.124: 98.8971% ( 3) 00:10:03.292 14954.124 - 15013.702: 98.9138% ( 2) 00:10:03.292 15013.702 - 15073.280: 98.9305% ( 2) 00:10:03.292 20852.364 - 20971.520: 98.9388% ( 1) 00:10:03.292 20971.520 - 21090.676: 98.9639% ( 3) 00:10:03.292 21090.676 - 21209.833: 99.0057% ( 5) 00:10:03.292 21209.833 - 21328.989: 99.0391% ( 4) 00:10:03.292 21328.989 - 21448.145: 99.0642% ( 3) 00:10:03.292 21448.145 - 21567.302: 99.0976% ( 4) 00:10:03.292 21567.302 
- 21686.458: 99.1310% ( 4) 00:10:03.292 21686.458 - 21805.615: 99.1561% ( 3) 00:10:03.292 21805.615 - 21924.771: 99.1895% ( 4) 00:10:03.292 21924.771 - 22043.927: 99.2229% ( 4) 00:10:03.292 22043.927 - 22163.084: 99.2564% ( 4) 00:10:03.292 22163.084 - 22282.240: 99.2898% ( 4) 00:10:03.292 22282.240 - 22401.396: 99.3232% ( 4) 00:10:03.292 22401.396 - 22520.553: 99.3566% ( 4) 00:10:03.292 22520.553 - 22639.709: 99.3817% ( 3) 00:10:03.292 22639.709 - 22758.865: 99.4151% ( 4) 00:10:03.292 22758.865 - 22878.022: 99.4402% ( 3) 00:10:03.292 22878.022 - 22997.178: 99.4652% ( 3) 00:10:03.292 28954.996 - 29074.153: 99.5070% ( 5) 00:10:03.292 29074.153 - 29193.309: 99.5488% ( 5) 00:10:03.292 29193.309 - 29312.465: 99.5989% ( 6) 00:10:03.292 29312.465 - 29431.622: 99.6240% ( 3) 00:10:03.292 29431.622 - 29550.778: 99.6658% ( 5) 00:10:03.292 29550.778 - 29669.935: 99.7076% ( 5) 00:10:03.292 29669.935 - 29789.091: 99.7493% ( 5) 00:10:03.292 29789.091 - 29908.247: 99.7911% ( 5) 00:10:03.292 29908.247 - 30027.404: 99.8245% ( 4) 00:10:03.292 30027.404 - 30146.560: 99.8663% ( 5) 00:10:03.292 30146.560 - 30265.716: 99.9081% ( 5) 00:10:03.292 30265.716 - 30384.873: 99.9499% ( 5) 00:10:03.292 30384.873 - 30504.029: 99.9916% ( 5) 00:10:03.292 30504.029 - 30742.342: 100.0000% ( 1) 00:10:03.292 00:10:03.292 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:03.292 ============================================================================== 00:10:03.293 Range in us Cumulative IO count 00:10:03.293 5451.404 - 5481.193: 0.0084% ( 1) 00:10:03.293 5481.193 - 5510.982: 0.0251% ( 2) 00:10:03.293 5510.982 - 5540.771: 0.0334% ( 1) 00:10:03.293 5540.771 - 5570.560: 0.0585% ( 3) 00:10:03.293 5570.560 - 5600.349: 0.0752% ( 2) 00:10:03.293 5600.349 - 5630.138: 0.0919% ( 2) 00:10:03.293 5630.138 - 5659.927: 0.1086% ( 2) 00:10:03.293 5659.927 - 5689.716: 0.1253% ( 2) 00:10:03.293 5689.716 - 5719.505: 0.1420% ( 2) 00:10:03.293 5719.505 - 5749.295: 0.1504% ( 1) 00:10:03.293 5749.295 - 5779.084: 0.1671% ( 2) 00:10:03.293 5779.084 - 5808.873: 0.1838% ( 2) 00:10:03.293 5808.873 - 5838.662: 0.2005% ( 2) 00:10:03.293 5838.662 - 5868.451: 0.2172% ( 2) 00:10:03.293 5868.451 - 5898.240: 0.2340% ( 2) 00:10:03.293 5898.240 - 5928.029: 0.2507% ( 2) 00:10:03.293 5928.029 - 5957.818: 0.2757% ( 3) 00:10:03.293 5957.818 - 5987.607: 0.2924% ( 2) 00:10:03.293 5987.607 - 6017.396: 0.3092% ( 2) 00:10:03.293 6017.396 - 6047.185: 0.3259% ( 2) 00:10:03.293 6047.185 - 6076.975: 0.3426% ( 2) 00:10:03.293 6076.975 - 6106.764: 0.3593% ( 2) 00:10:03.293 6106.764 - 6136.553: 0.3760% ( 2) 00:10:03.293 6136.553 - 6166.342: 0.3844% ( 1) 00:10:03.293 6166.342 - 6196.131: 0.4011% ( 2) 00:10:03.293 6196.131 - 6225.920: 0.4178% ( 2) 00:10:03.293 6225.920 - 6255.709: 0.4345% ( 2) 00:10:03.293 6255.709 - 6285.498: 0.4428% ( 1) 00:10:03.293 6285.498 - 6315.287: 0.4679% ( 3) 00:10:03.293 6315.287 - 6345.076: 0.4763% ( 1) 00:10:03.293 6345.076 - 6374.865: 0.4930% ( 2) 00:10:03.293 6374.865 - 6404.655: 0.5013% ( 1) 00:10:03.293 6404.655 - 6434.444: 0.5180% ( 2) 00:10:03.293 6434.444 - 6464.233: 0.5348% ( 2) 00:10:03.293 9055.884 - 9115.462: 0.5682% ( 4) 00:10:03.293 9115.462 - 9175.040: 0.6350% ( 8) 00:10:03.293 9175.040 - 9234.618: 0.6768% ( 5) 00:10:03.293 9234.618 - 9294.196: 0.8189% ( 17) 00:10:03.293 9294.196 - 9353.775: 1.0779% ( 31) 00:10:03.293 9353.775 - 9413.353: 1.5541% ( 57) 00:10:03.293 9413.353 - 9472.931: 2.3646% ( 97) 00:10:03.293 9472.931 - 9532.509: 3.5010% ( 136) 00:10:03.293 9532.509 - 9592.087: 5.0802% ( 189) 00:10:03.293 9592.087 
- 9651.665: 7.1691% ( 250) 00:10:03.293 9651.665 - 9711.244: 9.5338% ( 283) 00:10:03.293 9711.244 - 9770.822: 12.4081% ( 344) 00:10:03.293 9770.822 - 9830.400: 15.5164% ( 372) 00:10:03.293 9830.400 - 9889.978: 18.9004% ( 405) 00:10:03.293 9889.978 - 9949.556: 22.3095% ( 408) 00:10:03.293 9949.556 - 10009.135: 25.9525% ( 436) 00:10:03.293 10009.135 - 10068.713: 29.6123% ( 438) 00:10:03.293 10068.713 - 10128.291: 33.2136% ( 431) 00:10:03.293 10128.291 - 10187.869: 36.9151% ( 443) 00:10:03.293 10187.869 - 10247.447: 40.6835% ( 451) 00:10:03.293 10247.447 - 10307.025: 44.3850% ( 443) 00:10:03.293 10307.025 - 10366.604: 48.2035% ( 457) 00:10:03.293 10366.604 - 10426.182: 52.0388% ( 459) 00:10:03.293 10426.182 - 10485.760: 55.7403% ( 443) 00:10:03.293 10485.760 - 10545.338: 59.4836% ( 448) 00:10:03.293 10545.338 - 10604.916: 63.1852% ( 443) 00:10:03.293 10604.916 - 10664.495: 66.5608% ( 404) 00:10:03.293 10664.495 - 10724.073: 69.7694% ( 384) 00:10:03.293 10724.073 - 10783.651: 72.5936% ( 338) 00:10:03.293 10783.651 - 10843.229: 75.2005% ( 312) 00:10:03.293 10843.229 - 10902.807: 77.3646% ( 259) 00:10:03.293 10902.807 - 10962.385: 79.2447% ( 225) 00:10:03.293 10962.385 - 11021.964: 80.9993% ( 210) 00:10:03.293 11021.964 - 11081.542: 82.6788% ( 201) 00:10:03.293 11081.542 - 11141.120: 84.1661% ( 178) 00:10:03.293 11141.120 - 11200.698: 85.6785% ( 181) 00:10:03.293 11200.698 - 11260.276: 87.0655% ( 166) 00:10:03.293 11260.276 - 11319.855: 88.4108% ( 161) 00:10:03.293 11319.855 - 11379.433: 89.5471% ( 136) 00:10:03.293 11379.433 - 11439.011: 90.6083% ( 127) 00:10:03.293 11439.011 - 11498.589: 91.4188% ( 97) 00:10:03.293 11498.589 - 11558.167: 92.0455% ( 75) 00:10:03.293 11558.167 - 11617.745: 92.5134% ( 56) 00:10:03.293 11617.745 - 11677.324: 92.9646% ( 54) 00:10:03.293 11677.324 - 11736.902: 93.3740% ( 49) 00:10:03.293 11736.902 - 11796.480: 93.7751% ( 48) 00:10:03.293 11796.480 - 11856.058: 94.1344% ( 43) 00:10:03.293 11856.058 - 11915.636: 94.4184% ( 34) 00:10:03.293 11915.636 - 11975.215: 94.6441% ( 27) 00:10:03.293 11975.215 - 12034.793: 94.8697% ( 27) 00:10:03.293 12034.793 - 12094.371: 95.1036% ( 28) 00:10:03.293 12094.371 - 12153.949: 95.3376% ( 28) 00:10:03.293 12153.949 - 12213.527: 95.5465% ( 25) 00:10:03.293 12213.527 - 12273.105: 95.7804% ( 28) 00:10:03.293 12273.105 - 12332.684: 95.9893% ( 25) 00:10:03.293 12332.684 - 12392.262: 96.2149% ( 27) 00:10:03.293 12392.262 - 12451.840: 96.3987% ( 22) 00:10:03.293 12451.840 - 12511.418: 96.5826% ( 22) 00:10:03.293 12511.418 - 12570.996: 96.7330% ( 18) 00:10:03.293 12570.996 - 12630.575: 96.8499% ( 14) 00:10:03.293 12630.575 - 12690.153: 96.9586% ( 13) 00:10:03.293 12690.153 - 12749.731: 97.0421% ( 10) 00:10:03.293 12749.731 - 12809.309: 97.1257% ( 10) 00:10:03.293 12809.309 - 12868.887: 97.1591% ( 4) 00:10:03.293 12868.887 - 12928.465: 97.1842% ( 3) 00:10:03.293 12928.465 - 12988.044: 97.2092% ( 3) 00:10:03.293 12988.044 - 13047.622: 97.2426% ( 4) 00:10:03.293 13047.622 - 13107.200: 97.2677% ( 3) 00:10:03.293 13107.200 - 13166.778: 97.2928% ( 3) 00:10:03.293 13166.778 - 13226.356: 97.3262% ( 4) 00:10:03.293 13226.356 - 13285.935: 97.3763% ( 6) 00:10:03.293 13285.935 - 13345.513: 97.4098% ( 4) 00:10:03.293 13345.513 - 13405.091: 97.4181% ( 1) 00:10:03.293 13405.091 - 13464.669: 97.4432% ( 3) 00:10:03.293 13464.669 - 13524.247: 97.4682% ( 3) 00:10:03.293 13524.247 - 13583.825: 97.4933% ( 3) 00:10:03.293 13583.825 - 13643.404: 97.5100% ( 2) 00:10:03.293 13643.404 - 13702.982: 97.5351% ( 3) 00:10:03.293 13702.982 - 13762.560: 97.5685% ( 4) 
00:10:03.293 13762.560 - 13822.138: 97.6354% ( 8) 00:10:03.293 13822.138 - 13881.716: 97.7106% ( 9) 00:10:03.293 13881.716 - 13941.295: 97.7941% ( 10) 00:10:03.293 13941.295 - 14000.873: 97.8777% ( 10) 00:10:03.293 14000.873 - 14060.451: 97.9529% ( 9) 00:10:03.293 14060.451 - 14120.029: 98.0197% ( 8) 00:10:03.293 14120.029 - 14179.607: 98.1116% ( 11) 00:10:03.293 14179.607 - 14239.185: 98.1868% ( 9) 00:10:03.293 14239.185 - 14298.764: 98.2704% ( 10) 00:10:03.293 14298.764 - 14358.342: 98.3456% ( 9) 00:10:03.293 14358.342 - 14417.920: 98.4208% ( 9) 00:10:03.293 14417.920 - 14477.498: 98.4793% ( 7) 00:10:03.293 14477.498 - 14537.076: 98.5461% ( 8) 00:10:03.293 14537.076 - 14596.655: 98.6046% ( 7) 00:10:03.293 14596.655 - 14656.233: 98.6547% ( 6) 00:10:03.293 14656.233 - 14715.811: 98.7049% ( 6) 00:10:03.293 14715.811 - 14775.389: 98.7634% ( 7) 00:10:03.293 14775.389 - 14834.967: 98.8135% ( 6) 00:10:03.293 14834.967 - 14894.545: 98.8803% ( 8) 00:10:03.293 14894.545 - 14954.124: 98.9054% ( 3) 00:10:03.293 14954.124 - 15013.702: 98.9305% ( 3) 00:10:03.293 20137.425 - 20256.582: 98.9472% ( 2) 00:10:03.293 20256.582 - 20375.738: 98.9806% ( 4) 00:10:03.293 20375.738 - 20494.895: 99.0224% ( 5) 00:10:03.293 20494.895 - 20614.051: 99.0558% ( 4) 00:10:03.293 20614.051 - 20733.207: 99.0725% ( 2) 00:10:03.293 20733.207 - 20852.364: 99.1059% ( 4) 00:10:03.293 20852.364 - 20971.520: 99.1477% ( 5) 00:10:03.293 20971.520 - 21090.676: 99.1728% ( 3) 00:10:03.293 21090.676 - 21209.833: 99.2062% ( 4) 00:10:03.293 21209.833 - 21328.989: 99.2396% ( 4) 00:10:03.293 21328.989 - 21448.145: 99.2731% ( 4) 00:10:03.293 21448.145 - 21567.302: 99.2981% ( 3) 00:10:03.293 21567.302 - 21686.458: 99.3316% ( 4) 00:10:03.293 21686.458 - 21805.615: 99.3650% ( 4) 00:10:03.293 21805.615 - 21924.771: 99.3900% ( 3) 00:10:03.293 21924.771 - 22043.927: 99.4235% ( 4) 00:10:03.293 22043.927 - 22163.084: 99.4569% ( 4) 00:10:03.293 22163.084 - 22282.240: 99.4652% ( 1) 00:10:03.293 27763.433 - 27882.589: 99.4903% ( 3) 00:10:03.293 27882.589 - 28001.745: 99.5321% ( 5) 00:10:03.293 28001.745 - 28120.902: 99.5655% ( 4) 00:10:03.293 28120.902 - 28240.058: 99.6073% ( 5) 00:10:03.293 28240.058 - 28359.215: 99.6491% ( 5) 00:10:03.293 28359.215 - 28478.371: 99.6992% ( 6) 00:10:03.293 28478.371 - 28597.527: 99.7410% ( 5) 00:10:03.294 28597.527 - 28716.684: 99.7828% ( 5) 00:10:03.294 28716.684 - 28835.840: 99.8245% ( 5) 00:10:03.294 28835.840 - 28954.996: 99.8663% ( 5) 00:10:03.294 28954.996 - 29074.153: 99.9081% ( 5) 00:10:03.294 29074.153 - 29193.309: 99.9499% ( 5) 00:10:03.294 29193.309 - 29312.465: 99.9916% ( 5) 00:10:03.294 29312.465 - 29431.622: 100.0000% ( 1) 00:10:03.294 00:10:03.294 20:26:57 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:04.672 Initializing NVMe Controllers 00:10:04.672 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:04.672 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:04.672 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:04.672 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:04.672 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:04.672 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:04.672 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:04.672 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:04.672 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:04.672 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:04.672 
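The spdk_nvme_perf invocation above ("-q 128 -w write -o 12288 -t 1 -LL -i 0") is what produces the per-device summary and histogram blocks that follow: as I read the flags, -q 128 sets the queue depth, -w write selects the workload, -o 12288 is the I/O size in bytes, -t 1 is the run time in seconds, and the doubled -LL enables the detailed software latency tracking whose output is printed below. As a sketch of how that output might be post-processed (this is not something the autotest itself does), the Python fragment below pulls the percentile lines out of a saved copy of this output; the file name perf.log, the regular expressions and the parse_summary() helper are assumptions made for this illustration, and it presumes the tool's normal one-entry-per-line stdout rather than the collapsed lines shown in this log.

import re

# Hedged sketch, not part of the captured log: one way the "Summary latency
# data" blocks printed by spdk_nvme_perf with -LL could be extracted from a
# saved copy of the output. "perf.log", the regexes and parse_summary() are
# assumptions made for this illustration only.

# Header lines look like:
#   Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
HEADER_RE = re.compile(r"Summary latency data for\s+(.+?)\s+from core \d+:")
# Percentile lines look like:
#   99.00000% : 28478.371us
PERCENTILE_RE = re.compile(r"(\d+\.\d+)%\s*:\s*([\d.]+)us")


def parse_summary(log_text: str) -> dict:
    """Return {device: {percentile: latency_us}} from spdk_nvme_perf output."""
    results: dict = {}
    current = None
    for line in log_text.splitlines():
        header = HEADER_RE.search(line)
        if header:
            current = header.group(1)
            results.setdefault(current, {})
            line = line[header.end():]  # keep scanning the rest of the line
        if current is None:
            continue
        for pct, lat in PERCENTILE_RE.findall(line):
            results[current][float(pct)] = float(lat)
    return results


if __name__ == "__main__":
    with open("perf.log") as fh:  # hypothetical saved copy of the stdout above
        summary = parse_summary(fh.read())
    for device, pcts in summary.items():
        print(device, "p99 =", pcts.get(99.0), "us")

For the histogram blocks themselves, the percentage column appears to be cumulative while the parenthesised figure looks like the per-bucket I/O count (the deltas between consecutive cumulative percentages line up with those counts); that is my reading of the format rather than something the log states.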
Initialization complete. Launching workers. 00:10:04.672 ======================================================== 00:10:04.672 Latency(us) 00:10:04.672 Device Information : IOPS MiB/s Average min max 00:10:04.672 PCIE (0000:00:10.0) NSID 1 from core 0: 10039.58 117.65 12755.29 7986.96 39113.27 00:10:04.672 PCIE (0000:00:11.0) NSID 1 from core 0: 10039.58 117.65 12738.07 7446.75 37934.28 00:10:04.672 PCIE (0000:00:13.0) NSID 1 from core 0: 10039.58 117.65 12719.88 6239.60 38495.00 00:10:04.672 PCIE (0000:00:12.0) NSID 1 from core 0: 10039.58 117.65 12700.65 5856.92 37314.47 00:10:04.672 PCIE (0000:00:12.0) NSID 2 from core 0: 10103.52 118.40 12602.03 5567.19 28796.33 00:10:04.672 PCIE (0000:00:12.0) NSID 3 from core 0: 10103.52 118.40 12583.37 5192.77 28245.62 00:10:04.672 ======================================================== 00:10:04.672 Total : 60365.35 707.41 12683.02 5192.77 39113.27 00:10:04.672 00:10:04.672 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:04.672 ================================================================================= 00:10:04.672 1.00000% : 10068.713us 00:10:04.672 10.00000% : 11021.964us 00:10:04.672 25.00000% : 11498.589us 00:10:04.672 50.00000% : 12273.105us 00:10:04.672 75.00000% : 13702.982us 00:10:04.672 90.00000% : 14477.498us 00:10:04.672 95.00000% : 14894.545us 00:10:04.672 98.00000% : 15609.484us 00:10:04.672 99.00000% : 28478.371us 00:10:04.672 99.50000% : 37891.724us 00:10:04.672 99.90000% : 38844.975us 00:10:04.672 99.99000% : 39083.287us 00:10:04.672 99.99900% : 39321.600us 00:10:04.672 99.99990% : 39321.600us 00:10:04.672 99.99999% : 39321.600us 00:10:04.672 00:10:04.672 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:04.673 ================================================================================= 00:10:04.673 1.00000% : 10247.447us 00:10:04.673 10.00000% : 11081.542us 00:10:04.673 25.00000% : 11498.589us 00:10:04.673 50.00000% : 12273.105us 00:10:04.673 75.00000% : 13702.982us 00:10:04.673 90.00000% : 14358.342us 00:10:04.673 95.00000% : 14834.967us 00:10:04.673 98.00000% : 15609.484us 00:10:04.673 99.00000% : 27763.433us 00:10:04.673 99.50000% : 36938.473us 00:10:04.673 99.90000% : 37891.724us 00:10:04.673 99.99000% : 38130.036us 00:10:04.673 99.99900% : 38130.036us 00:10:04.673 99.99990% : 38130.036us 00:10:04.673 99.99999% : 38130.036us 00:10:04.673 00:10:04.673 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:04.673 ================================================================================= 00:10:04.673 1.00000% : 10247.447us 00:10:04.673 10.00000% : 11021.964us 00:10:04.673 25.00000% : 11439.011us 00:10:04.673 50.00000% : 12273.105us 00:10:04.673 75.00000% : 13702.982us 00:10:04.673 90.00000% : 14417.920us 00:10:04.673 95.00000% : 14834.967us 00:10:04.673 98.00000% : 15490.327us 00:10:04.673 99.00000% : 28240.058us 00:10:04.673 99.50000% : 37415.098us 00:10:04.673 99.90000% : 38368.349us 00:10:04.673 99.99000% : 38606.662us 00:10:04.673 99.99900% : 38606.662us 00:10:04.673 99.99990% : 38606.662us 00:10:04.673 99.99999% : 38606.662us 00:10:04.673 00:10:04.673 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:04.673 ================================================================================= 00:10:04.673 1.00000% : 10009.135us 00:10:04.673 10.00000% : 11021.964us 00:10:04.673 25.00000% : 11439.011us 00:10:04.673 50.00000% : 12213.527us 00:10:04.673 75.00000% : 13643.404us 00:10:04.673 90.00000% : 14358.342us 00:10:04.673 95.00000% : 
14894.545us 00:10:04.673 98.00000% : 15490.327us 00:10:04.673 99.00000% : 27644.276us 00:10:04.673 99.50000% : 36223.535us 00:10:04.673 99.90000% : 37176.785us 00:10:04.673 99.99000% : 37415.098us 00:10:04.673 99.99900% : 37415.098us 00:10:04.673 99.99990% : 37415.098us 00:10:04.673 99.99999% : 37415.098us 00:10:04.673 00:10:04.673 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:04.673 ================================================================================= 00:10:04.673 1.00000% : 9532.509us 00:10:04.673 10.00000% : 11081.542us 00:10:04.673 25.00000% : 11498.589us 00:10:04.673 50.00000% : 12213.527us 00:10:04.673 75.00000% : 13702.982us 00:10:04.673 90.00000% : 14417.920us 00:10:04.673 95.00000% : 14894.545us 00:10:04.673 98.00000% : 15609.484us 00:10:04.673 99.00000% : 18945.862us 00:10:04.673 99.50000% : 27644.276us 00:10:04.673 99.90000% : 28597.527us 00:10:04.673 99.99000% : 28835.840us 00:10:04.673 99.99900% : 28835.840us 00:10:04.673 99.99990% : 28835.840us 00:10:04.673 99.99999% : 28835.840us 00:10:04.673 00:10:04.673 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:04.673 ================================================================================= 00:10:04.673 1.00000% : 9175.040us 00:10:04.673 10.00000% : 11081.542us 00:10:04.673 25.00000% : 11498.589us 00:10:04.673 50.00000% : 12213.527us 00:10:04.673 75.00000% : 13702.982us 00:10:04.673 90.00000% : 14358.342us 00:10:04.673 95.00000% : 14834.967us 00:10:04.673 98.00000% : 15609.484us 00:10:04.673 99.00000% : 18230.924us 00:10:04.673 99.50000% : 26929.338us 00:10:04.673 99.90000% : 28001.745us 00:10:04.673 99.99000% : 28240.058us 00:10:04.673 99.99900% : 28359.215us 00:10:04.673 99.99990% : 28359.215us 00:10:04.673 99.99999% : 28359.215us 00:10:04.673 00:10:04.673 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:04.673 ============================================================================== 00:10:04.673 Range in us Cumulative IO count 00:10:04.673 7983.476 - 8043.055: 0.1095% ( 11) 00:10:04.673 8043.055 - 8102.633: 0.2189% ( 11) 00:10:04.673 8102.633 - 8162.211: 0.2488% ( 3) 00:10:04.673 8221.789 - 8281.367: 0.2986% ( 5) 00:10:04.673 8281.367 - 8340.945: 0.3085% ( 1) 00:10:04.673 8340.945 - 8400.524: 0.3483% ( 4) 00:10:04.673 8400.524 - 8460.102: 0.3881% ( 4) 00:10:04.673 8460.102 - 8519.680: 0.4180% ( 3) 00:10:04.673 8519.680 - 8579.258: 0.4678% ( 5) 00:10:04.673 8579.258 - 8638.836: 0.5076% ( 4) 00:10:04.673 8638.836 - 8698.415: 0.5374% ( 3) 00:10:04.673 8698.415 - 8757.993: 0.5772% ( 4) 00:10:04.673 8757.993 - 8817.571: 0.6270% ( 5) 00:10:04.673 8817.571 - 8877.149: 0.6369% ( 1) 00:10:04.673 9830.400 - 9889.978: 0.6568% ( 2) 00:10:04.673 9889.978 - 9949.556: 0.7365% ( 8) 00:10:04.673 9949.556 - 10009.135: 0.8559% ( 12) 00:10:04.673 10009.135 - 10068.713: 1.0848% ( 23) 00:10:04.673 10068.713 - 10128.291: 1.2540% ( 17) 00:10:04.673 10128.291 - 10187.869: 1.4132% ( 16) 00:10:04.673 10187.869 - 10247.447: 1.6023% ( 19) 00:10:04.673 10247.447 - 10307.025: 2.0502% ( 45) 00:10:04.673 10307.025 - 10366.604: 2.7468% ( 70) 00:10:04.673 10366.604 - 10426.182: 3.3340% ( 59) 00:10:04.673 10426.182 - 10485.760: 3.7520% ( 42) 00:10:04.673 10485.760 - 10545.338: 4.2297% ( 48) 00:10:04.673 10545.338 - 10604.916: 4.6875% ( 46) 00:10:04.673 10604.916 - 10664.495: 5.4638% ( 78) 00:10:04.673 10664.495 - 10724.073: 6.1803% ( 72) 00:10:04.673 10724.073 - 10783.651: 6.9367% ( 76) 00:10:04.673 10783.651 - 10843.229: 7.9220% ( 99) 00:10:04.673 10843.229 - 10902.807: 
8.8973% ( 98) 00:10:04.673 10902.807 - 10962.385: 9.9920% ( 110) 00:10:04.673 10962.385 - 11021.964: 11.3455% ( 136) 00:10:04.673 11021.964 - 11081.542: 12.9379% ( 160) 00:10:04.673 11081.542 - 11141.120: 14.2914% ( 136) 00:10:04.673 11141.120 - 11200.698: 15.9932% ( 171) 00:10:04.673 11200.698 - 11260.276: 17.7150% ( 173) 00:10:04.673 11260.276 - 11319.855: 19.5462% ( 184) 00:10:04.673 11319.855 - 11379.433: 21.6262% ( 209) 00:10:04.673 11379.433 - 11439.011: 23.7560% ( 214) 00:10:04.673 11439.011 - 11498.589: 25.8459% ( 210) 00:10:04.673 11498.589 - 11558.167: 27.7966% ( 196) 00:10:04.673 11558.167 - 11617.745: 29.6676% ( 188) 00:10:04.673 11617.745 - 11677.324: 31.5784% ( 192) 00:10:04.673 11677.324 - 11736.902: 33.5888% ( 202) 00:10:04.673 11736.902 - 11796.480: 35.8579% ( 228) 00:10:04.673 11796.480 - 11856.058: 37.8384% ( 199) 00:10:04.673 11856.058 - 11915.636: 40.1572% ( 233) 00:10:04.673 11915.636 - 11975.215: 42.2273% ( 208) 00:10:04.673 11975.215 - 12034.793: 44.4268% ( 221) 00:10:04.673 12034.793 - 12094.371: 46.1385% ( 172) 00:10:04.673 12094.371 - 12153.949: 47.9598% ( 183) 00:10:04.673 12153.949 - 12213.527: 49.6119% ( 166) 00:10:04.673 12213.527 - 12273.105: 51.1246% ( 152) 00:10:04.673 12273.105 - 12332.684: 52.5179% ( 140) 00:10:04.673 12332.684 - 12392.262: 53.6326% ( 112) 00:10:04.673 12392.262 - 12451.840: 54.6576% ( 103) 00:10:04.673 12451.840 - 12511.418: 55.6429% ( 99) 00:10:04.673 12511.418 - 12570.996: 56.5088% ( 87) 00:10:04.673 12570.996 - 12630.575: 57.4343% ( 93) 00:10:04.673 12630.575 - 12690.153: 58.5987% ( 117) 00:10:04.673 12690.153 - 12749.731: 59.7532% ( 116) 00:10:04.673 12749.731 - 12809.309: 60.8977% ( 115) 00:10:04.673 12809.309 - 12868.887: 62.0322% ( 114) 00:10:04.673 12868.887 - 12928.465: 63.1071% ( 108) 00:10:04.673 12928.465 - 12988.044: 63.9928% ( 89) 00:10:04.673 12988.044 - 13047.622: 64.8189% ( 83) 00:10:04.673 13047.622 - 13107.200: 65.8041% ( 99) 00:10:04.673 13107.200 - 13166.778: 66.7197% ( 92) 00:10:04.673 13166.778 - 13226.356: 67.4064% ( 69) 00:10:04.673 13226.356 - 13285.935: 68.1728% ( 77) 00:10:04.673 13285.935 - 13345.513: 69.5064% ( 134) 00:10:04.673 13345.513 - 13405.091: 70.9196% ( 142) 00:10:04.673 13405.091 - 13464.669: 71.9049% ( 99) 00:10:04.673 13464.669 - 13524.247: 72.7707% ( 87) 00:10:04.673 13524.247 - 13583.825: 73.7560% ( 99) 00:10:04.673 13583.825 - 13643.404: 74.8706% ( 112) 00:10:04.673 13643.404 - 13702.982: 76.1445% ( 128) 00:10:04.673 13702.982 - 13762.560: 77.3985% ( 126) 00:10:04.673 13762.560 - 13822.138: 78.5928% ( 120) 00:10:04.673 13822.138 - 13881.716: 79.6377% ( 105) 00:10:04.673 13881.716 - 13941.295: 80.7126% ( 108) 00:10:04.673 13941.295 - 14000.873: 81.9566% ( 125) 00:10:04.673 14000.873 - 14060.451: 83.0613% ( 111) 00:10:04.673 14060.451 - 14120.029: 84.1959% ( 114) 00:10:04.673 14120.029 - 14179.607: 85.3901% ( 120) 00:10:04.673 14179.607 - 14239.185: 86.3854% ( 100) 00:10:04.673 14239.185 - 14298.764: 87.3010% ( 92) 00:10:04.673 14298.764 - 14358.342: 88.2763% ( 98) 00:10:04.673 14358.342 - 14417.920: 89.2018% ( 93) 00:10:04.673 14417.920 - 14477.498: 90.0876% ( 89) 00:10:04.673 14477.498 - 14537.076: 91.2122% ( 113) 00:10:04.673 14537.076 - 14596.655: 91.9287% ( 72) 00:10:04.673 14596.655 - 14656.233: 92.7349% ( 81) 00:10:04.673 14656.233 - 14715.811: 93.4116% ( 68) 00:10:04.673 14715.811 - 14775.389: 93.9988% ( 59) 00:10:04.673 14775.389 - 14834.967: 94.4566% ( 46) 00:10:04.673 14834.967 - 14894.545: 95.0637% ( 61) 00:10:04.673 14894.545 - 14954.124: 95.5314% ( 47) 00:10:04.673 14954.124 - 
15013.702: 95.8698% ( 34) 00:10:04.673 15013.702 - 15073.280: 96.2381% ( 37) 00:10:04.673 15073.280 - 15132.858: 96.5665% ( 33) 00:10:04.673 15132.858 - 15192.436: 96.8451% ( 28) 00:10:04.673 15192.436 - 15252.015: 97.0840% ( 24) 00:10:04.673 15252.015 - 15371.171: 97.5119% ( 43) 00:10:04.673 15371.171 - 15490.327: 97.9100% ( 40) 00:10:04.673 15490.327 - 15609.484: 98.1688% ( 26) 00:10:04.673 15609.484 - 15728.640: 98.3678% ( 20) 00:10:04.673 15728.640 - 15847.796: 98.5072% ( 14) 00:10:04.673 15847.796 - 15966.953: 98.5868% ( 8) 00:10:04.673 15966.953 - 16086.109: 98.6365% ( 5) 00:10:04.673 16086.109 - 16205.265: 98.6863% ( 5) 00:10:04.673 16205.265 - 16324.422: 98.7261% ( 4) 00:10:04.673 27525.120 - 27644.276: 98.7560% ( 3) 00:10:04.673 27644.276 - 27763.433: 98.8057% ( 5) 00:10:04.673 27763.433 - 27882.589: 98.8356% ( 3) 00:10:04.673 27882.589 - 28001.745: 98.8754% ( 4) 00:10:04.673 28001.745 - 28120.902: 98.9152% ( 4) 00:10:04.673 28120.902 - 28240.058: 98.9451% ( 3) 00:10:04.673 28240.058 - 28359.215: 98.9948% ( 5) 00:10:04.673 28359.215 - 28478.371: 99.0247% ( 3) 00:10:04.674 28478.371 - 28597.527: 99.0545% ( 3) 00:10:04.674 28597.527 - 28716.684: 99.0943% ( 4) 00:10:04.674 28716.684 - 28835.840: 99.1342% ( 4) 00:10:04.674 28835.840 - 28954.996: 99.1640% ( 3) 00:10:04.674 28954.996 - 29074.153: 99.2038% ( 4) 00:10:04.674 29074.153 - 29193.309: 99.2536% ( 5) 00:10:04.674 29193.309 - 29312.465: 99.2735% ( 2) 00:10:04.674 29312.465 - 29431.622: 99.3133% ( 4) 00:10:04.674 29431.622 - 29550.778: 99.3631% ( 5) 00:10:04.674 37176.785 - 37415.098: 99.3830% ( 2) 00:10:04.674 37415.098 - 37653.411: 99.4825% ( 10) 00:10:04.674 37653.411 - 37891.724: 99.5621% ( 8) 00:10:04.674 37891.724 - 38130.036: 99.6517% ( 9) 00:10:04.674 38130.036 - 38368.349: 99.7313% ( 8) 00:10:04.674 38368.349 - 38606.662: 99.8209% ( 9) 00:10:04.674 38606.662 - 38844.975: 99.9005% ( 8) 00:10:04.674 38844.975 - 39083.287: 99.9900% ( 9) 00:10:04.674 39083.287 - 39321.600: 100.0000% ( 1) 00:10:04.674 00:10:04.674 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:04.674 ============================================================================== 00:10:04.674 Range in us Cumulative IO count 00:10:04.674 7417.484 - 7447.273: 0.0100% ( 1) 00:10:04.674 7447.273 - 7477.062: 0.0398% ( 3) 00:10:04.674 7477.062 - 7506.851: 0.0995% ( 6) 00:10:04.674 7506.851 - 7536.640: 0.1493% ( 5) 00:10:04.674 7536.640 - 7566.429: 0.1990% ( 5) 00:10:04.674 7566.429 - 7596.218: 0.2687% ( 7) 00:10:04.674 7596.218 - 7626.007: 0.3384% ( 7) 00:10:04.674 7626.007 - 7685.585: 0.4279% ( 9) 00:10:04.674 7685.585 - 7745.164: 0.4976% ( 7) 00:10:04.674 7745.164 - 7804.742: 0.5573% ( 6) 00:10:04.674 7804.742 - 7864.320: 0.5971% ( 4) 00:10:04.674 7864.320 - 7923.898: 0.6270% ( 3) 00:10:04.674 7923.898 - 7983.476: 0.6369% ( 1) 00:10:04.674 9949.556 - 10009.135: 0.6668% ( 3) 00:10:04.674 10009.135 - 10068.713: 0.7365% ( 7) 00:10:04.674 10068.713 - 10128.291: 0.8061% ( 7) 00:10:04.674 10128.291 - 10187.869: 0.8957% ( 9) 00:10:04.674 10187.869 - 10247.447: 1.0151% ( 12) 00:10:04.674 10247.447 - 10307.025: 1.2142% ( 20) 00:10:04.674 10307.025 - 10366.604: 1.4928% ( 28) 00:10:04.674 10366.604 - 10426.182: 1.8213% ( 33) 00:10:04.674 10426.182 - 10485.760: 2.3885% ( 57) 00:10:04.674 10485.760 - 10545.338: 2.8762% ( 49) 00:10:04.674 10545.338 - 10604.916: 3.4932% ( 62) 00:10:04.674 10604.916 - 10664.495: 4.1998% ( 71) 00:10:04.674 10664.495 - 10724.073: 4.8865% ( 69) 00:10:04.674 10724.073 - 10783.651: 5.5533% ( 67) 00:10:04.674 10783.651 - 10843.229: 
6.4092% ( 86) 00:10:04.674 10843.229 - 10902.807: 7.2054% ( 80) 00:10:04.674 10902.807 - 10962.385: 8.5191% ( 132) 00:10:04.674 10962.385 - 11021.964: 9.6338% ( 112) 00:10:04.674 11021.964 - 11081.542: 11.0967% ( 147) 00:10:04.674 11081.542 - 11141.120: 12.6493% ( 156) 00:10:04.674 11141.120 - 11200.698: 14.4009% ( 176) 00:10:04.674 11200.698 - 11260.276: 16.2918% ( 190) 00:10:04.674 11260.276 - 11319.855: 18.5908% ( 231) 00:10:04.674 11319.855 - 11379.433: 21.0689% ( 249) 00:10:04.674 11379.433 - 11439.011: 23.5470% ( 249) 00:10:04.674 11439.011 - 11498.589: 25.8857% ( 235) 00:10:04.674 11498.589 - 11558.167: 28.0553% ( 218) 00:10:04.674 11558.167 - 11617.745: 30.4140% ( 237) 00:10:04.674 11617.745 - 11677.324: 32.7329% ( 233) 00:10:04.674 11677.324 - 11736.902: 35.2010% ( 248) 00:10:04.674 11736.902 - 11796.480: 37.4502% ( 226) 00:10:04.674 11796.480 - 11856.058: 39.6994% ( 226) 00:10:04.674 11856.058 - 11915.636: 41.4709% ( 178) 00:10:04.674 11915.636 - 11975.215: 43.1728% ( 171) 00:10:04.674 11975.215 - 12034.793: 44.5561% ( 139) 00:10:04.674 12034.793 - 12094.371: 46.3177% ( 177) 00:10:04.674 12094.371 - 12153.949: 47.7110% ( 140) 00:10:04.674 12153.949 - 12213.527: 48.9948% ( 129) 00:10:04.674 12213.527 - 12273.105: 50.4777% ( 149) 00:10:04.674 12273.105 - 12332.684: 52.2990% ( 183) 00:10:04.674 12332.684 - 12392.262: 53.7520% ( 146) 00:10:04.674 12392.262 - 12451.840: 55.1254% ( 138) 00:10:04.674 12451.840 - 12511.418: 56.3495% ( 123) 00:10:04.674 12511.418 - 12570.996: 57.9319% ( 159) 00:10:04.674 12570.996 - 12630.575: 59.2158% ( 129) 00:10:04.674 12630.575 - 12690.153: 60.2707% ( 106) 00:10:04.674 12690.153 - 12749.731: 61.3157% ( 105) 00:10:04.674 12749.731 - 12809.309: 62.1815% ( 87) 00:10:04.674 12809.309 - 12868.887: 63.1071% ( 93) 00:10:04.674 12868.887 - 12928.465: 64.0824% ( 98) 00:10:04.674 12928.465 - 12988.044: 64.9283% ( 85) 00:10:04.674 12988.044 - 13047.622: 65.6350% ( 71) 00:10:04.674 13047.622 - 13107.200: 66.1823% ( 55) 00:10:04.674 13107.200 - 13166.778: 66.8790% ( 70) 00:10:04.674 13166.778 - 13226.356: 67.5159% ( 64) 00:10:04.674 13226.356 - 13285.935: 68.0135% ( 50) 00:10:04.674 13285.935 - 13345.513: 68.5111% ( 50) 00:10:04.674 13345.513 - 13405.091: 69.0685% ( 56) 00:10:04.674 13405.091 - 13464.669: 69.9542% ( 89) 00:10:04.674 13464.669 - 13524.247: 71.0490% ( 110) 00:10:04.674 13524.247 - 13583.825: 72.4224% ( 138) 00:10:04.674 13583.825 - 13643.404: 73.8157% ( 140) 00:10:04.674 13643.404 - 13702.982: 75.1493% ( 134) 00:10:04.674 13702.982 - 13762.560: 76.5824% ( 144) 00:10:04.674 13762.560 - 13822.138: 78.1449% ( 157) 00:10:04.674 13822.138 - 13881.716: 79.5084% ( 137) 00:10:04.674 13881.716 - 13941.295: 80.7126% ( 121) 00:10:04.674 13941.295 - 14000.873: 82.0362% ( 133) 00:10:04.674 14000.873 - 14060.451: 83.5490% ( 152) 00:10:04.674 14060.451 - 14120.029: 85.1214% ( 158) 00:10:04.674 14120.029 - 14179.607: 86.4948% ( 138) 00:10:04.674 14179.607 - 14239.185: 87.8085% ( 132) 00:10:04.674 14239.185 - 14298.764: 89.0625% ( 126) 00:10:04.674 14298.764 - 14358.342: 90.1174% ( 106) 00:10:04.674 14358.342 - 14417.920: 91.2022% ( 109) 00:10:04.674 14417.920 - 14477.498: 92.0880% ( 89) 00:10:04.674 14477.498 - 14537.076: 92.7548% ( 67) 00:10:04.674 14537.076 - 14596.655: 93.2922% ( 54) 00:10:04.674 14596.655 - 14656.233: 93.7102% ( 42) 00:10:04.674 14656.233 - 14715.811: 94.2178% ( 51) 00:10:04.674 14715.811 - 14775.389: 94.6258% ( 41) 00:10:04.674 14775.389 - 14834.967: 95.1334% ( 51) 00:10:04.674 14834.967 - 14894.545: 95.5713% ( 44) 00:10:04.674 14894.545 - 
14954.124: 95.9693% ( 40) 00:10:04.674 14954.124 - 15013.702: 96.2580% ( 29) 00:10:04.674 15013.702 - 15073.280: 96.5068% ( 25) 00:10:04.674 15073.280 - 15132.858: 96.7954% ( 29) 00:10:04.674 15132.858 - 15192.436: 97.0342% ( 24) 00:10:04.674 15192.436 - 15252.015: 97.2432% ( 21) 00:10:04.674 15252.015 - 15371.171: 97.6015% ( 36) 00:10:04.674 15371.171 - 15490.327: 97.9200% ( 32) 00:10:04.674 15490.327 - 15609.484: 98.2285% ( 31) 00:10:04.674 15609.484 - 15728.640: 98.4076% ( 18) 00:10:04.674 15728.640 - 15847.796: 98.4972% ( 9) 00:10:04.674 15847.796 - 15966.953: 98.5669% ( 7) 00:10:04.674 15966.953 - 16086.109: 98.6465% ( 8) 00:10:04.674 16086.109 - 16205.265: 98.6863% ( 4) 00:10:04.674 16205.265 - 16324.422: 98.7162% ( 3) 00:10:04.674 16324.422 - 16443.578: 98.7261% ( 1) 00:10:04.674 26929.338 - 27048.495: 98.7659% ( 4) 00:10:04.674 27048.495 - 27167.651: 98.8057% ( 4) 00:10:04.674 27167.651 - 27286.807: 98.8555% ( 5) 00:10:04.674 27286.807 - 27405.964: 98.9053% ( 5) 00:10:04.674 27405.964 - 27525.120: 98.9451% ( 4) 00:10:04.674 27525.120 - 27644.276: 98.9948% ( 5) 00:10:04.674 27644.276 - 27763.433: 99.0346% ( 4) 00:10:04.674 27763.433 - 27882.589: 99.0744% ( 4) 00:10:04.674 27882.589 - 28001.745: 99.1143% ( 4) 00:10:04.674 28001.745 - 28120.902: 99.1541% ( 4) 00:10:04.674 28120.902 - 28240.058: 99.1939% ( 4) 00:10:04.674 28240.058 - 28359.215: 99.2436% ( 5) 00:10:04.674 28359.215 - 28478.371: 99.2934% ( 5) 00:10:04.674 28478.371 - 28597.527: 99.3332% ( 4) 00:10:04.674 28597.527 - 28716.684: 99.3631% ( 3) 00:10:04.674 36223.535 - 36461.847: 99.3730% ( 1) 00:10:04.674 36461.847 - 36700.160: 99.4725% ( 10) 00:10:04.674 36700.160 - 36938.473: 99.5721% ( 10) 00:10:04.674 36938.473 - 37176.785: 99.6616% ( 9) 00:10:04.674 37176.785 - 37415.098: 99.7711% ( 11) 00:10:04.674 37415.098 - 37653.411: 99.8806% ( 11) 00:10:04.674 37653.411 - 37891.724: 99.9801% ( 10) 00:10:04.674 37891.724 - 38130.036: 100.0000% ( 2) 00:10:04.674 00:10:04.674 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:04.674 ============================================================================== 00:10:04.674 Range in us Cumulative IO count 00:10:04.674 6225.920 - 6255.709: 0.0299% ( 3) 00:10:04.674 6255.709 - 6285.498: 0.0995% ( 7) 00:10:04.674 6285.498 - 6315.287: 0.1592% ( 6) 00:10:04.674 6315.287 - 6345.076: 0.2886% ( 13) 00:10:04.674 6345.076 - 6374.865: 0.4180% ( 13) 00:10:04.674 6374.865 - 6404.655: 0.4578% ( 4) 00:10:04.674 6404.655 - 6434.444: 0.4777% ( 2) 00:10:04.674 6434.444 - 6464.233: 0.4976% ( 2) 00:10:04.674 6464.233 - 6494.022: 0.5175% ( 2) 00:10:04.674 6494.022 - 6523.811: 0.5374% ( 2) 00:10:04.674 6523.811 - 6553.600: 0.5573% ( 2) 00:10:04.674 6553.600 - 6583.389: 0.5673% ( 1) 00:10:04.674 6583.389 - 6613.178: 0.5872% ( 2) 00:10:04.674 6613.178 - 6642.967: 0.6170% ( 3) 00:10:04.674 6642.967 - 6672.756: 0.6270% ( 1) 00:10:04.674 6672.756 - 6702.545: 0.6369% ( 1) 00:10:04.674 9949.556 - 10009.135: 0.6668% ( 3) 00:10:04.674 10009.135 - 10068.713: 0.7365% ( 7) 00:10:04.674 10068.713 - 10128.291: 0.8061% ( 7) 00:10:04.674 10128.291 - 10187.869: 0.9355% ( 13) 00:10:04.674 10187.869 - 10247.447: 1.0350% ( 10) 00:10:04.674 10247.447 - 10307.025: 1.3137% ( 28) 00:10:04.674 10307.025 - 10366.604: 1.4729% ( 16) 00:10:04.674 10366.604 - 10426.182: 1.6819% ( 21) 00:10:04.674 10426.182 - 10485.760: 2.0104% ( 33) 00:10:04.674 10485.760 - 10545.338: 2.4383% ( 43) 00:10:04.674 10545.338 - 10604.916: 3.1150% ( 68) 00:10:04.674 10604.916 - 10664.495: 4.0008% ( 89) 00:10:04.674 10664.495 - 10724.073: 
4.6875% ( 69) 00:10:04.674 10724.073 - 10783.651: 5.4737% ( 79) 00:10:04.674 10783.651 - 10843.229: 6.6779% ( 121) 00:10:04.674 10843.229 - 10902.807: 7.8822% ( 121) 00:10:04.674 10902.807 - 10962.385: 9.1859% ( 131) 00:10:04.674 10962.385 - 11021.964: 10.7086% ( 153) 00:10:04.674 11021.964 - 11081.542: 12.2512% ( 155) 00:10:04.674 11081.542 - 11141.120: 13.9232% ( 168) 00:10:04.675 11141.120 - 11200.698: 16.2520% ( 234) 00:10:04.675 11200.698 - 11260.276: 18.4116% ( 217) 00:10:04.675 11260.276 - 11319.855: 20.7205% ( 232) 00:10:04.675 11319.855 - 11379.433: 22.9299% ( 222) 00:10:04.675 11379.433 - 11439.011: 25.2787% ( 236) 00:10:04.675 11439.011 - 11498.589: 27.3587% ( 209) 00:10:04.675 11498.589 - 11558.167: 29.5880% ( 224) 00:10:04.675 11558.167 - 11617.745: 31.8869% ( 231) 00:10:04.675 11617.745 - 11677.324: 33.8077% ( 193) 00:10:04.675 11677.324 - 11736.902: 35.9475% ( 215) 00:10:04.675 11736.902 - 11796.480: 38.1568% ( 222) 00:10:04.675 11796.480 - 11856.058: 39.9482% ( 180) 00:10:04.675 11856.058 - 11915.636: 41.7098% ( 177) 00:10:04.675 11915.636 - 11975.215: 43.4813% ( 178) 00:10:04.675 11975.215 - 12034.793: 45.2428% ( 177) 00:10:04.675 12034.793 - 12094.371: 46.8053% ( 157) 00:10:04.675 12094.371 - 12153.949: 48.1986% ( 140) 00:10:04.675 12153.949 - 12213.527: 49.8010% ( 161) 00:10:04.675 12213.527 - 12273.105: 51.2838% ( 149) 00:10:04.675 12273.105 - 12332.684: 52.7170% ( 144) 00:10:04.675 12332.684 - 12392.262: 54.1600% ( 145) 00:10:04.675 12392.262 - 12451.840: 55.5334% ( 138) 00:10:04.675 12451.840 - 12511.418: 56.7476% ( 122) 00:10:04.675 12511.418 - 12570.996: 57.7229% ( 98) 00:10:04.675 12570.996 - 12630.575: 58.8575% ( 114) 00:10:04.675 12630.575 - 12690.153: 59.9124% ( 106) 00:10:04.675 12690.153 - 12749.731: 60.7584% ( 85) 00:10:04.675 12749.731 - 12809.309: 61.5247% ( 77) 00:10:04.675 12809.309 - 12868.887: 62.2512% ( 73) 00:10:04.675 12868.887 - 12928.465: 63.1469% ( 90) 00:10:04.675 12928.465 - 12988.044: 63.9928% ( 85) 00:10:04.675 12988.044 - 13047.622: 64.6696% ( 68) 00:10:04.675 13047.622 - 13107.200: 65.4956% ( 83) 00:10:04.675 13107.200 - 13166.778: 66.4013% ( 91) 00:10:04.675 13166.778 - 13226.356: 67.2572% ( 86) 00:10:04.675 13226.356 - 13285.935: 67.9837% ( 73) 00:10:04.675 13285.935 - 13345.513: 68.9889% ( 101) 00:10:04.675 13345.513 - 13405.091: 69.8646% ( 88) 00:10:04.675 13405.091 - 13464.669: 71.0490% ( 119) 00:10:04.675 13464.669 - 13524.247: 72.1636% ( 112) 00:10:04.675 13524.247 - 13583.825: 73.3778% ( 122) 00:10:04.675 13583.825 - 13643.404: 74.8010% ( 143) 00:10:04.675 13643.404 - 13702.982: 76.2540% ( 146) 00:10:04.675 13702.982 - 13762.560: 77.5677% ( 132) 00:10:04.675 13762.560 - 13822.138: 78.9212% ( 136) 00:10:04.675 13822.138 - 13881.716: 80.2349% ( 132) 00:10:04.675 13881.716 - 13941.295: 81.4590% ( 123) 00:10:04.675 13941.295 - 14000.873: 82.6931% ( 124) 00:10:04.675 14000.873 - 14060.451: 84.0068% ( 132) 00:10:04.675 14060.451 - 14120.029: 85.4299% ( 143) 00:10:04.675 14120.029 - 14179.607: 86.7038% ( 128) 00:10:04.675 14179.607 - 14239.185: 87.6592% ( 96) 00:10:04.675 14239.185 - 14298.764: 88.8137% ( 116) 00:10:04.675 14298.764 - 14358.342: 89.9184% ( 111) 00:10:04.675 14358.342 - 14417.920: 90.8738% ( 96) 00:10:04.675 14417.920 - 14477.498: 91.6501% ( 78) 00:10:04.675 14477.498 - 14537.076: 92.2870% ( 64) 00:10:04.675 14537.076 - 14596.655: 93.0036% ( 72) 00:10:04.675 14596.655 - 14656.233: 93.6206% ( 62) 00:10:04.675 14656.233 - 14715.811: 94.1879% ( 57) 00:10:04.675 14715.811 - 14775.389: 94.6158% ( 43) 00:10:04.675 14775.389 - 
14834.967: 95.0338% ( 42) 00:10:04.675 14834.967 - 14894.545: 95.3722% ( 34) 00:10:04.675 14894.545 - 14954.124: 95.6907% ( 32) 00:10:04.675 14954.124 - 15013.702: 96.0291% ( 34) 00:10:04.675 15013.702 - 15073.280: 96.3276% ( 30) 00:10:04.675 15073.280 - 15132.858: 96.6162% ( 29) 00:10:04.675 15132.858 - 15192.436: 96.8750% ( 26) 00:10:04.675 15192.436 - 15252.015: 97.1736% ( 30) 00:10:04.675 15252.015 - 15371.171: 97.7906% ( 62) 00:10:04.675 15371.171 - 15490.327: 98.1190% ( 33) 00:10:04.675 15490.327 - 15609.484: 98.3877% ( 27) 00:10:04.675 15609.484 - 15728.640: 98.5669% ( 18) 00:10:04.675 15728.640 - 15847.796: 98.6564% ( 9) 00:10:04.675 15847.796 - 15966.953: 98.7162% ( 6) 00:10:04.675 15966.953 - 16086.109: 98.7261% ( 1) 00:10:04.675 27405.964 - 27525.120: 98.7560% ( 3) 00:10:04.675 27525.120 - 27644.276: 98.7958% ( 4) 00:10:04.675 27644.276 - 27763.433: 98.8356% ( 4) 00:10:04.675 27763.433 - 27882.589: 98.8854% ( 5) 00:10:04.675 27882.589 - 28001.745: 98.9152% ( 3) 00:10:04.675 28001.745 - 28120.902: 98.9550% ( 4) 00:10:04.675 28120.902 - 28240.058: 99.0048% ( 5) 00:10:04.675 28240.058 - 28359.215: 99.0446% ( 4) 00:10:04.675 28359.215 - 28478.371: 99.0844% ( 4) 00:10:04.675 28478.371 - 28597.527: 99.1242% ( 4) 00:10:04.675 28597.527 - 28716.684: 99.1740% ( 5) 00:10:04.675 28716.684 - 28835.840: 99.2138% ( 4) 00:10:04.675 28835.840 - 28954.996: 99.2536% ( 4) 00:10:04.675 28954.996 - 29074.153: 99.3033% ( 5) 00:10:04.675 29074.153 - 29193.309: 99.3432% ( 4) 00:10:04.675 29193.309 - 29312.465: 99.3631% ( 2) 00:10:04.675 36700.160 - 36938.473: 99.4029% ( 4) 00:10:04.675 36938.473 - 37176.785: 99.4825% ( 8) 00:10:04.675 37176.785 - 37415.098: 99.5621% ( 8) 00:10:04.675 37415.098 - 37653.411: 99.6616% ( 10) 00:10:04.675 37653.411 - 37891.724: 99.7611% ( 10) 00:10:04.675 37891.724 - 38130.036: 99.8507% ( 9) 00:10:04.675 38130.036 - 38368.349: 99.9403% ( 9) 00:10:04.675 38368.349 - 38606.662: 100.0000% ( 6) 00:10:04.675 00:10:04.675 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:04.675 ============================================================================== 00:10:04.675 Range in us Cumulative IO count 00:10:04.675 5838.662 - 5868.451: 0.0100% ( 1) 00:10:04.675 5868.451 - 5898.240: 0.0498% ( 4) 00:10:04.675 5898.240 - 5928.029: 0.0896% ( 4) 00:10:04.675 5928.029 - 5957.818: 0.1493% ( 6) 00:10:04.675 5957.818 - 5987.607: 0.1990% ( 5) 00:10:04.675 5987.607 - 6017.396: 0.2488% ( 5) 00:10:04.675 6017.396 - 6047.185: 0.3284% ( 8) 00:10:04.675 6047.185 - 6076.975: 0.3981% ( 7) 00:10:04.675 6076.975 - 6106.764: 0.4479% ( 5) 00:10:04.675 6106.764 - 6136.553: 0.4678% ( 2) 00:10:04.675 6136.553 - 6166.342: 0.4976% ( 3) 00:10:04.675 6166.342 - 6196.131: 0.5374% ( 4) 00:10:04.675 6196.131 - 6225.920: 0.5573% ( 2) 00:10:04.675 6225.920 - 6255.709: 0.5673% ( 1) 00:10:04.675 6255.709 - 6285.498: 0.5872% ( 2) 00:10:04.675 6285.498 - 6315.287: 0.6071% ( 2) 00:10:04.675 6315.287 - 6345.076: 0.6170% ( 1) 00:10:04.675 6374.865 - 6404.655: 0.6270% ( 1) 00:10:04.675 6404.655 - 6434.444: 0.6369% ( 1) 00:10:04.675 9651.665 - 9711.244: 0.6469% ( 1) 00:10:04.675 9770.822 - 9830.400: 0.6668% ( 2) 00:10:04.675 9830.400 - 9889.978: 0.7763% ( 11) 00:10:04.675 9889.978 - 9949.556: 0.9156% ( 14) 00:10:04.675 9949.556 - 10009.135: 1.0450% ( 13) 00:10:04.675 10009.135 - 10068.713: 1.1246% ( 8) 00:10:04.675 10068.713 - 10128.291: 1.1545% ( 3) 00:10:04.675 10128.291 - 10187.869: 1.1943% ( 4) 00:10:04.675 10187.869 - 10247.447: 1.2838% ( 9) 00:10:04.675 10247.447 - 10307.025: 1.4132% ( 13) 
00:10:04.675 10307.025 - 10366.604: 1.5525% ( 14) 00:10:04.675 10366.604 - 10426.182: 1.7217% ( 17) 00:10:04.675 10426.182 - 10485.760: 2.0203% ( 30) 00:10:04.675 10485.760 - 10545.338: 2.6075% ( 59) 00:10:04.675 10545.338 - 10604.916: 3.1748% ( 57) 00:10:04.675 10604.916 - 10664.495: 3.9411% ( 77) 00:10:04.675 10664.495 - 10724.073: 4.6477% ( 71) 00:10:04.675 10724.073 - 10783.651: 5.8221% ( 118) 00:10:04.675 10783.651 - 10843.229: 7.0163% ( 120) 00:10:04.675 10843.229 - 10902.807: 8.1907% ( 118) 00:10:04.675 10902.807 - 10962.385: 9.4347% ( 125) 00:10:04.675 10962.385 - 11021.964: 10.7683% ( 134) 00:10:04.675 11021.964 - 11081.542: 12.1716% ( 141) 00:10:04.675 11081.542 - 11141.120: 14.1023% ( 194) 00:10:04.675 11141.120 - 11200.698: 16.0231% ( 193) 00:10:04.675 11200.698 - 11260.276: 18.6903% ( 268) 00:10:04.675 11260.276 - 11319.855: 20.5912% ( 191) 00:10:04.675 11319.855 - 11379.433: 22.8901% ( 231) 00:10:04.675 11379.433 - 11439.011: 25.1791% ( 230) 00:10:04.675 11439.011 - 11498.589: 27.2492% ( 208) 00:10:04.675 11498.589 - 11558.167: 29.2197% ( 198) 00:10:04.675 11558.167 - 11617.745: 31.0410% ( 183) 00:10:04.675 11617.745 - 11677.324: 32.8822% ( 185) 00:10:04.675 11677.324 - 11736.902: 34.6338% ( 176) 00:10:04.675 11736.902 - 11796.480: 36.2858% ( 166) 00:10:04.675 11796.480 - 11856.058: 38.3758% ( 210) 00:10:04.675 11856.058 - 11915.636: 40.7146% ( 235) 00:10:04.675 11915.636 - 11975.215: 42.7150% ( 201) 00:10:04.675 11975.215 - 12034.793: 44.4566% ( 175) 00:10:04.675 12034.793 - 12094.371: 46.2679% ( 182) 00:10:04.675 12094.371 - 12153.949: 48.2783% ( 202) 00:10:04.675 12153.949 - 12213.527: 50.0697% ( 180) 00:10:04.675 12213.527 - 12273.105: 51.6123% ( 155) 00:10:04.675 12273.105 - 12332.684: 53.0653% ( 146) 00:10:04.675 12332.684 - 12392.262: 54.3690% ( 131) 00:10:04.675 12392.262 - 12451.840: 55.7325% ( 137) 00:10:04.675 12451.840 - 12511.418: 56.8869% ( 116) 00:10:04.675 12511.418 - 12570.996: 58.1509% ( 127) 00:10:04.675 12570.996 - 12630.575: 59.3352% ( 119) 00:10:04.675 12630.575 - 12690.153: 60.3304% ( 100) 00:10:04.675 12690.153 - 12749.731: 61.5048% ( 118) 00:10:04.675 12749.731 - 12809.309: 62.7787% ( 128) 00:10:04.675 12809.309 - 12868.887: 63.7540% ( 98) 00:10:04.675 12868.887 - 12928.465: 64.5402% ( 79) 00:10:04.675 12928.465 - 12988.044: 65.2369% ( 70) 00:10:04.675 12988.044 - 13047.622: 66.0231% ( 79) 00:10:04.675 13047.622 - 13107.200: 66.4908% ( 47) 00:10:04.675 13107.200 - 13166.778: 67.0979% ( 61) 00:10:04.675 13166.778 - 13226.356: 67.6553% ( 56) 00:10:04.675 13226.356 - 13285.935: 68.4116% ( 76) 00:10:04.675 13285.935 - 13345.513: 69.2974% ( 89) 00:10:04.675 13345.513 - 13405.091: 70.3523% ( 106) 00:10:04.675 13405.091 - 13464.669: 71.5864% ( 124) 00:10:04.675 13464.669 - 13524.247: 72.8901% ( 131) 00:10:04.675 13524.247 - 13583.825: 74.2237% ( 134) 00:10:04.675 13583.825 - 13643.404: 75.3682% ( 115) 00:10:04.675 13643.404 - 13702.982: 76.7018% ( 134) 00:10:04.675 13702.982 - 13762.560: 77.8563% ( 116) 00:10:04.675 13762.560 - 13822.138: 79.0008% ( 115) 00:10:04.675 13822.138 - 13881.716: 80.1254% ( 113) 00:10:04.675 13881.716 - 13941.295: 81.2799% ( 116) 00:10:04.675 13941.295 - 14000.873: 82.4940% ( 122) 00:10:04.675 14000.873 - 14060.451: 83.7779% ( 129) 00:10:04.675 14060.451 - 14120.029: 85.0418% ( 127) 00:10:04.675 14120.029 - 14179.607: 86.3455% ( 131) 00:10:04.675 14179.607 - 14239.185: 87.5896% ( 125) 00:10:04.676 14239.185 - 14298.764: 88.9033% ( 132) 00:10:04.676 14298.764 - 14358.342: 90.0876% ( 119) 00:10:04.676 14358.342 - 14417.920: 91.1027% 
( 102) 00:10:04.676 14417.920 - 14477.498: 91.9088% ( 81) 00:10:04.676 14477.498 - 14537.076: 92.6254% ( 72) 00:10:04.676 14537.076 - 14596.655: 93.1330% ( 51) 00:10:04.676 14596.655 - 14656.233: 93.6007% ( 47) 00:10:04.676 14656.233 - 14715.811: 94.0983% ( 50) 00:10:04.676 14715.811 - 14775.389: 94.5064% ( 41) 00:10:04.676 14775.389 - 14834.967: 94.8846% ( 38) 00:10:04.676 14834.967 - 14894.545: 95.1732% ( 29) 00:10:04.676 14894.545 - 14954.124: 95.5016% ( 33) 00:10:04.676 14954.124 - 15013.702: 95.9096% ( 41) 00:10:04.676 15013.702 - 15073.280: 96.1982% ( 29) 00:10:04.676 15073.280 - 15132.858: 96.4570% ( 26) 00:10:04.676 15132.858 - 15192.436: 96.7257% ( 27) 00:10:04.676 15192.436 - 15252.015: 96.9845% ( 26) 00:10:04.676 15252.015 - 15371.171: 97.6612% ( 68) 00:10:04.676 15371.171 - 15490.327: 98.0892% ( 43) 00:10:04.676 15490.327 - 15609.484: 98.3877% ( 30) 00:10:04.676 15609.484 - 15728.640: 98.6266% ( 24) 00:10:04.676 15728.640 - 15847.796: 98.7162% ( 9) 00:10:04.676 15847.796 - 15966.953: 98.7261% ( 1) 00:10:04.676 26691.025 - 26810.182: 98.7361% ( 1) 00:10:04.676 26810.182 - 26929.338: 98.7759% ( 4) 00:10:04.676 26929.338 - 27048.495: 98.8256% ( 5) 00:10:04.676 27048.495 - 27167.651: 98.8654% ( 4) 00:10:04.676 27167.651 - 27286.807: 98.9152% ( 5) 00:10:04.676 27286.807 - 27405.964: 98.9650% ( 5) 00:10:04.676 27405.964 - 27525.120: 98.9948% ( 3) 00:10:04.676 27525.120 - 27644.276: 99.0346% ( 4) 00:10:04.676 27644.276 - 27763.433: 99.0844% ( 5) 00:10:04.676 27763.433 - 27882.589: 99.1342% ( 5) 00:10:04.676 27882.589 - 28001.745: 99.1839% ( 5) 00:10:04.676 28001.745 - 28120.902: 99.2237% ( 4) 00:10:04.676 28120.902 - 28240.058: 99.2735% ( 5) 00:10:04.676 28240.058 - 28359.215: 99.3133% ( 4) 00:10:04.676 28359.215 - 28478.371: 99.3631% ( 5) 00:10:04.676 35746.909 - 35985.222: 99.4327% ( 7) 00:10:04.676 35985.222 - 36223.535: 99.5422% ( 11) 00:10:04.676 36223.535 - 36461.847: 99.6417% ( 10) 00:10:04.676 36461.847 - 36700.160: 99.7313% ( 9) 00:10:04.676 36700.160 - 36938.473: 99.8408% ( 11) 00:10:04.676 36938.473 - 37176.785: 99.9403% ( 10) 00:10:04.676 37176.785 - 37415.098: 100.0000% ( 6) 00:10:04.676 00:10:04.676 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:04.676 ============================================================================== 00:10:04.676 Range in us Cumulative IO count 00:10:04.676 5540.771 - 5570.560: 0.0198% ( 2) 00:10:04.676 5570.560 - 5600.349: 0.0297% ( 1) 00:10:04.676 5600.349 - 5630.138: 0.0593% ( 3) 00:10:04.676 5630.138 - 5659.927: 0.0989% ( 4) 00:10:04.676 5659.927 - 5689.716: 0.1286% ( 3) 00:10:04.676 5689.716 - 5719.505: 0.1681% ( 4) 00:10:04.676 5719.505 - 5749.295: 0.2275% ( 6) 00:10:04.676 5749.295 - 5779.084: 0.2868% ( 6) 00:10:04.676 5779.084 - 5808.873: 0.4055% ( 12) 00:10:04.676 5808.873 - 5838.662: 0.4450% ( 4) 00:10:04.676 5838.662 - 5868.451: 0.4846% ( 4) 00:10:04.676 5868.451 - 5898.240: 0.5044% ( 2) 00:10:04.676 5898.240 - 5928.029: 0.5142% ( 1) 00:10:04.676 5928.029 - 5957.818: 0.5340% ( 2) 00:10:04.676 5957.818 - 5987.607: 0.5538% ( 2) 00:10:04.676 5987.607 - 6017.396: 0.5637% ( 1) 00:10:04.676 6017.396 - 6047.185: 0.5835% ( 2) 00:10:04.676 6047.185 - 6076.975: 0.6032% ( 2) 00:10:04.676 6076.975 - 6106.764: 0.6230% ( 2) 00:10:04.676 6106.764 - 6136.553: 0.6329% ( 1) 00:10:04.676 9175.040 - 9234.618: 0.7318% ( 10) 00:10:04.676 9234.618 - 9294.196: 0.8505% ( 12) 00:10:04.676 9294.196 - 9353.775: 0.9296% ( 8) 00:10:04.676 9353.775 - 9413.353: 0.9593% ( 3) 00:10:04.676 9413.353 - 9472.931: 0.9889% ( 3) 00:10:04.676 9472.931 
- 9532.509: 1.0186% ( 3) 00:10:04.676 9532.509 - 9592.087: 1.0483% ( 3) 00:10:04.676 9592.087 - 9651.665: 1.0779% ( 3) 00:10:04.676 9651.665 - 9711.244: 1.1175% ( 4) 00:10:04.676 9711.244 - 9770.822: 1.1570% ( 4) 00:10:04.676 9770.822 - 9830.400: 1.1966% ( 4) 00:10:04.676 9830.400 - 9889.978: 1.2460% ( 5) 00:10:04.676 9889.978 - 9949.556: 1.2658% ( 2) 00:10:04.676 10068.713 - 10128.291: 1.2955% ( 3) 00:10:04.676 10128.291 - 10187.869: 1.3647% ( 7) 00:10:04.676 10187.869 - 10247.447: 1.4339% ( 7) 00:10:04.676 10247.447 - 10307.025: 1.5724% ( 14) 00:10:04.676 10307.025 - 10366.604: 1.8790% ( 31) 00:10:04.676 10366.604 - 10426.182: 2.1954% ( 32) 00:10:04.676 10426.182 - 10485.760: 2.6701% ( 48) 00:10:04.676 10485.760 - 10545.338: 3.3525% ( 69) 00:10:04.676 10545.338 - 10604.916: 4.2820% ( 94) 00:10:04.676 10604.916 - 10664.495: 5.0237% ( 75) 00:10:04.676 10664.495 - 10724.073: 6.0324% ( 102) 00:10:04.676 10724.073 - 10783.651: 6.9225% ( 90) 00:10:04.676 10783.651 - 10843.229: 7.7927% ( 88) 00:10:04.676 10843.229 - 10902.807: 8.4949% ( 71) 00:10:04.676 10902.807 - 10962.385: 9.1673% ( 68) 00:10:04.676 10962.385 - 11021.964: 9.9782% ( 82) 00:10:04.676 11021.964 - 11081.542: 10.8683% ( 90) 00:10:04.676 11081.542 - 11141.120: 12.2627% ( 141) 00:10:04.676 11141.120 - 11200.698: 14.2306% ( 199) 00:10:04.676 11200.698 - 11260.276: 16.4854% ( 228) 00:10:04.676 11260.276 - 11319.855: 18.9478% ( 249) 00:10:04.676 11319.855 - 11379.433: 21.3805% ( 246) 00:10:04.676 11379.433 - 11439.011: 23.5463% ( 219) 00:10:04.676 11439.011 - 11498.589: 25.5142% ( 199) 00:10:04.676 11498.589 - 11558.167: 27.6108% ( 212) 00:10:04.676 11558.167 - 11617.745: 29.6084% ( 202) 00:10:04.676 11617.745 - 11677.324: 31.5862% ( 200) 00:10:04.676 11677.324 - 11736.902: 33.9003% ( 234) 00:10:04.676 11736.902 - 11796.480: 35.8287% ( 195) 00:10:04.676 11796.480 - 11856.058: 37.8165% ( 201) 00:10:04.676 11856.058 - 11915.636: 40.3283% ( 254) 00:10:04.676 11915.636 - 11975.215: 42.8600% ( 256) 00:10:04.676 11975.215 - 12034.793: 45.1543% ( 232) 00:10:04.676 12034.793 - 12094.371: 46.7168% ( 158) 00:10:04.676 12094.371 - 12153.949: 48.4375% ( 174) 00:10:04.676 12153.949 - 12213.527: 50.2670% ( 185) 00:10:04.676 12213.527 - 12273.105: 51.9284% ( 168) 00:10:04.676 12273.105 - 12332.684: 53.4513% ( 154) 00:10:04.676 12332.684 - 12392.262: 55.1523% ( 172) 00:10:04.676 12392.262 - 12451.840: 56.2797% ( 114) 00:10:04.676 12451.840 - 12511.418: 57.4070% ( 114) 00:10:04.676 12511.418 - 12570.996: 58.4059% ( 101) 00:10:04.676 12570.996 - 12630.575: 59.2959% ( 90) 00:10:04.676 12630.575 - 12690.153: 60.2650% ( 98) 00:10:04.676 12690.153 - 12749.731: 61.2836% ( 103) 00:10:04.676 12749.731 - 12809.309: 62.1737% ( 90) 00:10:04.676 12809.309 - 12868.887: 63.1032% ( 94) 00:10:04.676 12868.887 - 12928.465: 63.9241% ( 83) 00:10:04.676 12928.465 - 12988.044: 64.7547% ( 84) 00:10:04.676 12988.044 - 13047.622: 65.5657% ( 82) 00:10:04.676 13047.622 - 13107.200: 66.3172% ( 76) 00:10:04.676 13107.200 - 13166.778: 66.8612% ( 55) 00:10:04.676 13166.778 - 13226.356: 67.4051% ( 55) 00:10:04.676 13226.356 - 13285.935: 68.0775% ( 68) 00:10:04.676 13285.935 - 13345.513: 68.7006% ( 63) 00:10:04.676 13345.513 - 13405.091: 69.4324% ( 74) 00:10:04.676 13405.091 - 13464.669: 70.3422% ( 92) 00:10:04.676 13464.669 - 13524.247: 71.5981% ( 127) 00:10:04.676 13524.247 - 13583.825: 73.2298% ( 165) 00:10:04.676 13583.825 - 13643.404: 74.7923% ( 158) 00:10:04.676 13643.404 - 13702.982: 76.1669% ( 139) 00:10:04.676 13702.982 - 13762.560: 77.3141% ( 116) 00:10:04.676 13762.560 - 
13822.138: 78.5403% ( 124) 00:10:04.676 13822.138 - 13881.716: 79.6974% ( 117) 00:10:04.676 13881.716 - 13941.295: 80.9335% ( 125) 00:10:04.676 13941.295 - 14000.873: 82.1598% ( 124) 00:10:04.676 14000.873 - 14060.451: 83.2872% ( 114) 00:10:04.676 14060.451 - 14120.029: 84.6816% ( 141) 00:10:04.676 14120.029 - 14179.607: 85.9672% ( 130) 00:10:04.676 14179.607 - 14239.185: 87.2923% ( 134) 00:10:04.676 14239.185 - 14298.764: 88.6373% ( 136) 00:10:04.676 14298.764 - 14358.342: 89.8141% ( 119) 00:10:04.676 14358.342 - 14417.920: 90.8821% ( 108) 00:10:04.676 14417.920 - 14477.498: 91.8117% ( 94) 00:10:04.676 14477.498 - 14537.076: 92.5237% ( 72) 00:10:04.676 14537.076 - 14596.655: 93.0775% ( 56) 00:10:04.676 14596.655 - 14656.233: 93.5918% ( 52) 00:10:04.676 14656.233 - 14715.811: 94.0368% ( 45) 00:10:04.676 14715.811 - 14775.389: 94.4126% ( 38) 00:10:04.676 14775.389 - 14834.967: 94.7983% ( 39) 00:10:04.676 14834.967 - 14894.545: 95.1839% ( 39) 00:10:04.676 14894.545 - 14954.124: 95.5301% ( 35) 00:10:04.676 14954.124 - 15013.702: 95.8663% ( 34) 00:10:04.676 15013.702 - 15073.280: 96.1926% ( 33) 00:10:04.676 15073.280 - 15132.858: 96.5487% ( 36) 00:10:04.676 15132.858 - 15192.436: 96.7959% ( 25) 00:10:04.676 15192.436 - 15252.015: 96.9739% ( 18) 00:10:04.676 15252.015 - 15371.171: 97.2903% ( 32) 00:10:04.676 15371.171 - 15490.327: 97.6562% ( 37) 00:10:04.676 15490.327 - 15609.484: 98.2793% ( 63) 00:10:04.676 15609.484 - 15728.640: 98.5067% ( 23) 00:10:04.676 15728.640 - 15847.796: 98.6748% ( 17) 00:10:04.676 15847.796 - 15966.953: 98.7342% ( 6) 00:10:04.676 18111.767 - 18230.924: 98.7441% ( 1) 00:10:04.676 18230.924 - 18350.080: 98.7737% ( 3) 00:10:04.676 18350.080 - 18469.236: 98.8232% ( 5) 00:10:04.676 18469.236 - 18588.393: 98.8726% ( 5) 00:10:04.676 18588.393 - 18707.549: 98.9122% ( 4) 00:10:04.676 18707.549 - 18826.705: 98.9517% ( 4) 00:10:04.676 18826.705 - 18945.862: 99.0012% ( 5) 00:10:04.676 18945.862 - 19065.018: 99.0506% ( 5) 00:10:04.676 19065.018 - 19184.175: 99.0902% ( 4) 00:10:04.676 19184.175 - 19303.331: 99.1297% ( 4) 00:10:04.676 19303.331 - 19422.487: 99.1792% ( 5) 00:10:04.676 19422.487 - 19541.644: 99.2188% ( 4) 00:10:04.676 19541.644 - 19660.800: 99.2583% ( 4) 00:10:04.676 19660.800 - 19779.956: 99.3078% ( 5) 00:10:04.676 19779.956 - 19899.113: 99.3473% ( 4) 00:10:04.676 19899.113 - 20018.269: 99.3671% ( 2) 00:10:04.676 27167.651 - 27286.807: 99.3869% ( 2) 00:10:04.676 27286.807 - 27405.964: 99.4363% ( 5) 00:10:04.676 27405.964 - 27525.120: 99.4858% ( 5) 00:10:04.676 27525.120 - 27644.276: 99.5352% ( 5) 00:10:04.676 27644.276 - 27763.433: 99.5847% ( 5) 00:10:04.676 27763.433 - 27882.589: 99.6242% ( 4) 00:10:04.676 27882.589 - 28001.745: 99.6737% ( 5) 00:10:04.676 28001.745 - 28120.902: 99.7231% ( 5) 00:10:04.676 28120.902 - 28240.058: 99.7725% ( 5) 00:10:04.677 28240.058 - 28359.215: 99.8121% ( 4) 00:10:04.677 28359.215 - 28478.371: 99.8616% ( 5) 00:10:04.677 28478.371 - 28597.527: 99.9110% ( 5) 00:10:04.677 28597.527 - 28716.684: 99.9604% ( 5) 00:10:04.677 28716.684 - 28835.840: 100.0000% ( 4) 00:10:04.677 00:10:04.677 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:04.677 ============================================================================== 00:10:04.677 Range in us Cumulative IO count 00:10:04.677 5183.302 - 5213.091: 0.0099% ( 1) 00:10:04.677 5362.036 - 5391.825: 0.0297% ( 2) 00:10:04.677 5391.825 - 5421.615: 0.0692% ( 4) 00:10:04.677 5421.615 - 5451.404: 0.1088% ( 4) 00:10:04.677 5451.404 - 5481.193: 0.1582% ( 5) 00:10:04.677 5481.193 - 
5510.982: 0.4252% ( 27) 00:10:04.677 5510.982 - 5540.771: 0.4747% ( 5) 00:10:04.677 5540.771 - 5570.560: 0.4945% ( 2) 00:10:04.677 5570.560 - 5600.349: 0.5044% ( 1) 00:10:04.677 5600.349 - 5630.138: 0.5241% ( 2) 00:10:04.677 5630.138 - 5659.927: 0.5439% ( 2) 00:10:04.677 5659.927 - 5689.716: 0.5637% ( 2) 00:10:04.677 5689.716 - 5719.505: 0.5835% ( 2) 00:10:04.677 5719.505 - 5749.295: 0.6032% ( 2) 00:10:04.677 5749.295 - 5779.084: 0.6230% ( 2) 00:10:04.677 5779.084 - 5808.873: 0.6329% ( 1) 00:10:04.677 8877.149 - 8936.727: 0.6626% ( 3) 00:10:04.677 8936.727 - 8996.305: 0.7516% ( 9) 00:10:04.677 8996.305 - 9055.884: 0.8505% ( 10) 00:10:04.677 9055.884 - 9115.462: 0.9494% ( 10) 00:10:04.677 9115.462 - 9175.040: 1.0186% ( 7) 00:10:04.677 9175.040 - 9234.618: 1.0581% ( 4) 00:10:04.677 9234.618 - 9294.196: 1.0878% ( 3) 00:10:04.677 9294.196 - 9353.775: 1.1274% ( 4) 00:10:04.677 9353.775 - 9413.353: 1.1570% ( 3) 00:10:04.677 9413.353 - 9472.931: 1.1966% ( 4) 00:10:04.677 9472.931 - 9532.509: 1.2263% ( 3) 00:10:04.677 9532.509 - 9592.087: 1.2658% ( 4) 00:10:04.677 9889.978 - 9949.556: 1.2856% ( 2) 00:10:04.677 9949.556 - 10009.135: 1.3350% ( 5) 00:10:04.677 10009.135 - 10068.713: 1.3845% ( 5) 00:10:04.677 10068.713 - 10128.291: 1.4834% ( 10) 00:10:04.677 10128.291 - 10187.869: 1.8493% ( 37) 00:10:04.677 10187.869 - 10247.447: 1.9383% ( 9) 00:10:04.677 10247.447 - 10307.025: 2.1262% ( 19) 00:10:04.677 10307.025 - 10366.604: 2.5910% ( 47) 00:10:04.677 10366.604 - 10426.182: 2.8975% ( 31) 00:10:04.677 10426.182 - 10485.760: 3.4118% ( 52) 00:10:04.677 10485.760 - 10545.338: 3.9359% ( 53) 00:10:04.677 10545.338 - 10604.916: 4.6183% ( 69) 00:10:04.677 10604.916 - 10664.495: 5.3105% ( 70) 00:10:04.677 10664.495 - 10724.073: 5.8248% ( 52) 00:10:04.677 10724.073 - 10783.651: 6.7247% ( 91) 00:10:04.677 10783.651 - 10843.229: 7.4367% ( 72) 00:10:04.677 10843.229 - 10902.807: 7.9114% ( 48) 00:10:04.677 10902.807 - 10962.385: 8.5443% ( 64) 00:10:04.677 10962.385 - 11021.964: 9.6321% ( 110) 00:10:04.677 11021.964 - 11081.542: 11.1155% ( 150) 00:10:04.677 11081.542 - 11141.120: 13.0736% ( 198) 00:10:04.677 11141.120 - 11200.698: 15.0514% ( 200) 00:10:04.677 11200.698 - 11260.276: 17.3161% ( 229) 00:10:04.677 11260.276 - 11319.855: 19.4719% ( 218) 00:10:04.677 11319.855 - 11379.433: 21.5684% ( 212) 00:10:04.677 11379.433 - 11439.011: 23.6254% ( 208) 00:10:04.677 11439.011 - 11498.589: 25.1879% ( 158) 00:10:04.677 11498.589 - 11558.167: 26.8592% ( 169) 00:10:04.677 11558.167 - 11617.745: 28.8074% ( 197) 00:10:04.677 11617.745 - 11677.324: 31.0720% ( 229) 00:10:04.677 11677.324 - 11736.902: 33.3465% ( 230) 00:10:04.677 11736.902 - 11796.480: 35.3343% ( 201) 00:10:04.677 11796.480 - 11856.058: 37.4308% ( 212) 00:10:04.677 11856.058 - 11915.636: 40.1602% ( 276) 00:10:04.677 11915.636 - 11975.215: 42.3161% ( 218) 00:10:04.677 11975.215 - 12034.793: 44.6301% ( 234) 00:10:04.677 12034.793 - 12094.371: 46.7860% ( 218) 00:10:04.677 12094.371 - 12153.949: 48.9023% ( 214) 00:10:04.677 12153.949 - 12213.527: 50.4648% ( 158) 00:10:04.677 12213.527 - 12273.105: 52.3932% ( 195) 00:10:04.677 12273.105 - 12332.684: 53.9755% ( 160) 00:10:04.677 12332.684 - 12392.262: 55.3995% ( 144) 00:10:04.677 12392.262 - 12451.840: 56.6456% ( 126) 00:10:04.677 12451.840 - 12511.418: 57.9707% ( 134) 00:10:04.677 12511.418 - 12570.996: 58.9597% ( 100) 00:10:04.677 12570.996 - 12630.575: 59.8695% ( 92) 00:10:04.677 12630.575 - 12690.153: 60.6903% ( 83) 00:10:04.677 12690.153 - 12749.731: 61.5111% ( 83) 00:10:04.677 12749.731 - 12809.309: 62.3121% 
( 81) 00:10:04.677 12809.309 - 12868.887: 63.0241% ( 72) 00:10:04.677 12868.887 - 12928.465: 63.6669% ( 65) 00:10:04.677 12928.465 - 12988.044: 64.2009% ( 54) 00:10:04.677 12988.044 - 13047.622: 64.7547% ( 56) 00:10:04.677 13047.622 - 13107.200: 65.5756% ( 83) 00:10:04.677 13107.200 - 13166.778: 66.2184% ( 65) 00:10:04.677 13166.778 - 13226.356: 67.0095% ( 80) 00:10:04.677 13226.356 - 13285.935: 67.8402% ( 84) 00:10:04.677 13285.935 - 13345.513: 68.6017% ( 77) 00:10:04.677 13345.513 - 13405.091: 69.6301% ( 104) 00:10:04.677 13405.091 - 13464.669: 70.7278% ( 111) 00:10:04.677 13464.669 - 13524.247: 72.1420% ( 143) 00:10:04.677 13524.247 - 13583.825: 73.3782% ( 125) 00:10:04.677 13583.825 - 13643.404: 74.7725% ( 141) 00:10:04.677 13643.404 - 13702.982: 75.8900% ( 113) 00:10:04.677 13702.982 - 13762.560: 77.1855% ( 131) 00:10:04.677 13762.560 - 13822.138: 78.3722% ( 120) 00:10:04.677 13822.138 - 13881.716: 79.7666% ( 141) 00:10:04.677 13881.716 - 13941.295: 81.3192% ( 157) 00:10:04.677 13941.295 - 14000.873: 82.7235% ( 142) 00:10:04.677 14000.873 - 14060.451: 84.0289% ( 132) 00:10:04.677 14060.451 - 14120.029: 85.3343% ( 132) 00:10:04.677 14120.029 - 14179.607: 86.6100% ( 129) 00:10:04.677 14179.607 - 14239.185: 88.0241% ( 143) 00:10:04.677 14239.185 - 14298.764: 89.2504% ( 124) 00:10:04.677 14298.764 - 14358.342: 90.2888% ( 105) 00:10:04.677 14358.342 - 14417.920: 91.3172% ( 104) 00:10:04.677 14417.920 - 14477.498: 92.1183% ( 81) 00:10:04.677 14477.498 - 14537.076: 92.7215% ( 61) 00:10:04.677 14537.076 - 14596.655: 93.4335% ( 72) 00:10:04.677 14596.655 - 14656.233: 93.9478% ( 52) 00:10:04.677 14656.233 - 14715.811: 94.3829% ( 44) 00:10:04.677 14715.811 - 14775.389: 94.7290% ( 35) 00:10:04.677 14775.389 - 14834.967: 95.0455% ( 32) 00:10:04.677 14834.967 - 14894.545: 95.3422% ( 30) 00:10:04.677 14894.545 - 14954.124: 95.6685% ( 33) 00:10:04.677 14954.124 - 15013.702: 95.9652% ( 30) 00:10:04.677 15013.702 - 15073.280: 96.2915% ( 33) 00:10:04.677 15073.280 - 15132.858: 96.5783% ( 29) 00:10:04.677 15132.858 - 15192.436: 96.8750% ( 30) 00:10:04.677 15192.436 - 15252.015: 97.1222% ( 25) 00:10:04.677 15252.015 - 15371.171: 97.6068% ( 49) 00:10:04.677 15371.171 - 15490.327: 97.8738% ( 27) 00:10:04.677 15490.327 - 15609.484: 98.0320% ( 16) 00:10:04.677 15609.484 - 15728.640: 98.1903% ( 16) 00:10:04.677 15728.640 - 15847.796: 98.3089% ( 12) 00:10:04.677 15847.796 - 15966.953: 98.4177% ( 11) 00:10:04.677 15966.953 - 16086.109: 98.5364% ( 12) 00:10:04.677 16086.109 - 16205.265: 98.6155% ( 8) 00:10:04.677 16205.265 - 16324.422: 98.6650% ( 5) 00:10:04.677 16324.422 - 16443.578: 98.7045% ( 4) 00:10:04.677 16443.578 - 16562.735: 98.7342% ( 3) 00:10:04.677 17515.985 - 17635.142: 98.7638% ( 3) 00:10:04.677 17635.142 - 17754.298: 98.8034% ( 4) 00:10:04.677 17754.298 - 17873.455: 98.8726% ( 7) 00:10:04.677 17873.455 - 17992.611: 98.9320% ( 6) 00:10:04.677 17992.611 - 18111.767: 98.9814% ( 5) 00:10:04.677 18111.767 - 18230.924: 99.0506% ( 7) 00:10:04.677 18230.924 - 18350.080: 99.0902% ( 4) 00:10:04.677 18350.080 - 18469.236: 99.1297% ( 4) 00:10:04.677 18469.236 - 18588.393: 99.1693% ( 4) 00:10:04.677 18588.393 - 18707.549: 99.1990% ( 3) 00:10:04.677 18707.549 - 18826.705: 99.2385% ( 4) 00:10:04.677 18826.705 - 18945.862: 99.2781% ( 4) 00:10:04.677 18945.862 - 19065.018: 99.3176% ( 4) 00:10:04.677 19065.018 - 19184.175: 99.3572% ( 4) 00:10:04.677 19184.175 - 19303.331: 99.3671% ( 1) 00:10:04.677 25856.931 - 25976.087: 99.3770% ( 1) 00:10:04.677 25976.087 - 26095.244: 99.3968% ( 2) 00:10:04.677 26095.244 - 
26214.400: 99.4264% ( 3) 00:10:04.677 26214.400 - 26333.556: 99.4462% ( 2) 00:10:04.677 26333.556 - 26452.713: 99.4956% ( 5) 00:10:04.677 26810.182 - 26929.338: 99.5154% ( 2) 00:10:04.677 26929.338 - 27048.495: 99.5451% ( 3) 00:10:04.678 27048.495 - 27167.651: 99.5945% ( 5) 00:10:04.678 27167.651 - 27286.807: 99.6440% ( 5) 00:10:04.678 27286.807 - 27405.964: 99.6835% ( 4) 00:10:04.678 27405.964 - 27525.120: 99.7330% ( 5) 00:10:04.678 27525.120 - 27644.276: 99.7725% ( 4) 00:10:04.678 27644.276 - 27763.433: 99.8220% ( 5) 00:10:04.678 27763.433 - 27882.589: 99.8616% ( 4) 00:10:04.678 27882.589 - 28001.745: 99.9011% ( 4) 00:10:04.678 28001.745 - 28120.902: 99.9506% ( 5) 00:10:04.678 28120.902 - 28240.058: 99.9901% ( 4) 00:10:04.678 28240.058 - 28359.215: 100.0000% ( 1) 00:10:04.678 00:10:04.678 20:26:58 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:04.678 00:10:04.678 real 0m2.747s 00:10:04.678 user 0m2.264s 00:10:04.678 sys 0m0.356s 00:10:04.678 20:26:58 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.678 ************************************ 00:10:04.678 END TEST nvme_perf 00:10:04.678 20:26:58 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:04.678 ************************************ 00:10:04.678 20:26:58 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:04.678 20:26:58 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:04.678 20:26:58 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:04.678 20:26:58 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.678 20:26:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.678 ************************************ 00:10:04.678 START TEST nvme_hello_world 00:10:04.678 ************************************ 00:10:04.678 20:26:58 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:04.937 Initializing NVMe Controllers 00:10:04.937 Attached to 0000:00:10.0 00:10:04.937 Namespace ID: 1 size: 6GB 00:10:04.937 Attached to 0000:00:11.0 00:10:04.937 Namespace ID: 1 size: 5GB 00:10:04.937 Attached to 0000:00:13.0 00:10:04.937 Namespace ID: 1 size: 1GB 00:10:04.937 Attached to 0000:00:12.0 00:10:04.937 Namespace ID: 1 size: 4GB 00:10:04.937 Namespace ID: 2 size: 4GB 00:10:04.937 Namespace ID: 3 size: 4GB 00:10:04.937 Initialization complete. 00:10:04.937 INFO: using host memory buffer for IO 00:10:04.937 Hello world! 00:10:04.937 INFO: using host memory buffer for IO 00:10:04.937 Hello world! 00:10:04.937 INFO: using host memory buffer for IO 00:10:04.937 Hello world! 00:10:04.937 INFO: using host memory buffer for IO 00:10:04.937 Hello world! 00:10:04.937 INFO: using host memory buffer for IO 00:10:04.937 Hello world! 00:10:04.937 INFO: using host memory buffer for IO 00:10:04.937 Hello world! 
00:10:04.937 00:10:04.937 real 0m0.282s 00:10:04.937 user 0m0.094s 00:10:04.937 sys 0m0.146s 00:10:04.937 20:26:58 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.937 20:26:58 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:04.937 ************************************ 00:10:04.937 END TEST nvme_hello_world 00:10:04.937 ************************************ 00:10:04.937 20:26:58 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:04.937 20:26:58 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:04.937 20:26:58 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:04.937 20:26:58 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.937 20:26:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.937 ************************************ 00:10:04.937 START TEST nvme_sgl 00:10:04.937 ************************************ 00:10:04.937 20:26:58 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:05.195 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:05.195 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:05.195 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:05.195 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:05.195 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:05.195 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:05.195 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:05.195 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:05.195 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:05.195 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:05.195 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:05.195 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:05.195 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:05.195 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:05.195 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:05.195 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:05.195 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:05.195 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:05.195 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:05.195 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:05.195 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:05.195 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:05.195 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:05.195 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:05.195 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:05.195 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:05.195 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:05.195 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:05.195 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:05.195 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:05.195 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:05.195 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:05.195 0000:00:12.0: build_io_request_8 Invalid IO length parameter 
00:10:05.195 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:10:05.195 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:05.195 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:05.195 NVMe Readv/Writev Request test 00:10:05.195 Attached to 0000:00:10.0 00:10:05.195 Attached to 0000:00:11.0 00:10:05.195 Attached to 0000:00:13.0 00:10:05.195 Attached to 0000:00:12.0 00:10:05.195 0000:00:10.0: build_io_request_2 test passed 00:10:05.195 0000:00:10.0: build_io_request_4 test passed 00:10:05.195 0000:00:10.0: build_io_request_5 test passed 00:10:05.195 0000:00:10.0: build_io_request_6 test passed 00:10:05.195 0000:00:10.0: build_io_request_7 test passed 00:10:05.195 0000:00:10.0: build_io_request_10 test passed 00:10:05.195 0000:00:11.0: build_io_request_2 test passed 00:10:05.195 0000:00:11.0: build_io_request_4 test passed 00:10:05.195 0000:00:11.0: build_io_request_5 test passed 00:10:05.195 0000:00:11.0: build_io_request_6 test passed 00:10:05.195 0000:00:11.0: build_io_request_7 test passed 00:10:05.195 0000:00:11.0: build_io_request_10 test passed 00:10:05.195 Cleaning up... 00:10:05.195 00:10:05.195 real 0m0.378s 00:10:05.195 user 0m0.178s 00:10:05.195 sys 0m0.151s 00:10:05.195 20:26:59 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.195 ************************************ 00:10:05.195 END TEST nvme_sgl 00:10:05.195 20:26:59 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:05.195 ************************************ 00:10:05.454 20:26:59 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:05.454 20:26:59 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:05.454 20:26:59 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:05.454 20:26:59 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.454 20:26:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.454 ************************************ 00:10:05.454 START TEST nvme_e2edp 00:10:05.454 ************************************ 00:10:05.454 20:26:59 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:05.713 NVMe Write/Read with End-to-End data protection test 00:10:05.713 Attached to 0000:00:10.0 00:10:05.713 Attached to 0000:00:11.0 00:10:05.713 Attached to 0000:00:13.0 00:10:05.713 Attached to 0000:00:12.0 00:10:05.713 Cleaning up... 
00:10:05.713 00:10:05.713 real 0m0.261s 00:10:05.713 user 0m0.085s 00:10:05.713 sys 0m0.131s 00:10:05.713 20:26:59 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.713 ************************************ 00:10:05.713 END TEST nvme_e2edp 00:10:05.713 20:26:59 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:05.713 ************************************ 00:10:05.713 20:26:59 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:05.713 20:26:59 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:05.713 20:26:59 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:05.713 20:26:59 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.713 20:26:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.713 ************************************ 00:10:05.713 START TEST nvme_reserve 00:10:05.713 ************************************ 00:10:05.713 20:26:59 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:05.973 ===================================================== 00:10:05.973 NVMe Controller at PCI bus 0, device 16, function 0 00:10:05.973 ===================================================== 00:10:05.973 Reservations: Not Supported 00:10:05.973 ===================================================== 00:10:05.973 NVMe Controller at PCI bus 0, device 17, function 0 00:10:05.973 ===================================================== 00:10:05.973 Reservations: Not Supported 00:10:05.974 ===================================================== 00:10:05.974 NVMe Controller at PCI bus 0, device 19, function 0 00:10:05.974 ===================================================== 00:10:05.974 Reservations: Not Supported 00:10:05.974 ===================================================== 00:10:05.974 NVMe Controller at PCI bus 0, device 18, function 0 00:10:05.974 ===================================================== 00:10:05.974 Reservations: Not Supported 00:10:05.974 Reservation test passed 00:10:05.974 00:10:05.974 real 0m0.232s 00:10:05.974 user 0m0.081s 00:10:05.974 sys 0m0.112s 00:10:05.974 20:26:59 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.974 ************************************ 00:10:05.974 END TEST nvme_reserve 00:10:05.974 20:26:59 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:05.974 ************************************ 00:10:05.974 20:26:59 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:05.974 20:26:59 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:05.974 20:26:59 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:05.974 20:26:59 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.974 20:26:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.974 ************************************ 00:10:05.974 START TEST nvme_err_injection 00:10:05.974 ************************************ 00:10:05.974 20:26:59 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:06.233 NVMe Error Injection test 00:10:06.233 Attached to 0000:00:10.0 00:10:06.233 Attached to 0000:00:11.0 00:10:06.233 Attached to 0000:00:13.0 00:10:06.233 Attached to 0000:00:12.0 00:10:06.233 0000:00:10.0: get features failed as expected 00:10:06.233 0000:00:11.0: get features 
failed as expected 00:10:06.233 0000:00:13.0: get features failed as expected 00:10:06.233 0000:00:12.0: get features failed as expected 00:10:06.233 0000:00:10.0: get features successfully as expected 00:10:06.233 0000:00:11.0: get features successfully as expected 00:10:06.233 0000:00:13.0: get features successfully as expected 00:10:06.233 0000:00:12.0: get features successfully as expected 00:10:06.233 0000:00:12.0: read failed as expected 00:10:06.233 0000:00:10.0: read failed as expected 00:10:06.233 0000:00:11.0: read failed as expected 00:10:06.233 0000:00:13.0: read failed as expected 00:10:06.233 0000:00:11.0: read successfully as expected 00:10:06.233 0000:00:13.0: read successfully as expected 00:10:06.233 0000:00:12.0: read successfully as expected 00:10:06.233 0000:00:10.0: read successfully as expected 00:10:06.233 Cleaning up... 00:10:06.233 00:10:06.233 real 0m0.296s 00:10:06.233 user 0m0.107s 00:10:06.233 sys 0m0.139s 00:10:06.233 20:27:00 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.233 20:27:00 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:06.233 ************************************ 00:10:06.233 END TEST nvme_err_injection 00:10:06.233 ************************************ 00:10:06.233 20:27:00 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:06.233 20:27:00 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:06.233 20:27:00 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:06.233 20:27:00 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.233 20:27:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:06.233 ************************************ 00:10:06.233 START TEST nvme_overhead 00:10:06.233 ************************************ 00:10:06.233 20:27:00 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:07.610 Initializing NVMe Controllers 00:10:07.610 Attached to 0000:00:10.0 00:10:07.610 Attached to 0000:00:11.0 00:10:07.610 Attached to 0000:00:13.0 00:10:07.610 Attached to 0000:00:12.0 00:10:07.610 Initialization complete. Launching workers. 
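For reference, the overhead benchmark traced above can be re-run by hand with the same command line the harness used; the flag meanings noted in the comments are assumptions based on common SPDK example-app conventions, not something this log confirms.

# Hedged sketch: re-run the overhead benchmark exactly as invoked above.
# Assumed flag meanings (not confirmed by this log):
#   -o 4096   I/O size in bytes
#   -t 1      run time in seconds
#   -H        print the submit/complete latency histograms
#   -i 0      shared-memory id used by the autotest harness
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/test/nvme/overhead/overhead" -o 4096 -t 1 -H -i 0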
00:10:07.610 submit (in ns) avg, min, max = 15953.9, 13170.0, 92585.9 00:10:07.610 complete (in ns) avg, min, max = 11662.9, 9564.1, 203841.4 00:10:07.610 00:10:07.610 Submit histogram 00:10:07.610 ================ 00:10:07.610 Range in us Cumulative Count 00:10:07.610 13.149 - 13.207: 0.0422% ( 4) 00:10:07.610 13.207 - 13.265: 0.1688% ( 12) 00:10:07.610 13.265 - 13.324: 1.4453% ( 121) 00:10:07.610 13.324 - 13.382: 5.0744% ( 344) 00:10:07.610 13.382 - 13.440: 12.7334% ( 726) 00:10:07.610 13.440 - 13.498: 24.1692% ( 1084) 00:10:07.610 13.498 - 13.556: 36.0798% ( 1129) 00:10:07.610 13.556 - 13.615: 46.0281% ( 943) 00:10:07.610 13.615 - 13.673: 53.5394% ( 712) 00:10:07.610 13.673 - 13.731: 58.8142% ( 500) 00:10:07.610 13.731 - 13.789: 62.8336% ( 381) 00:10:07.610 13.789 - 13.847: 65.8508% ( 286) 00:10:07.610 13.847 - 13.905: 68.0979% ( 213) 00:10:07.610 13.905 - 13.964: 69.8069% ( 162) 00:10:07.610 13.964 - 14.022: 71.3894% ( 150) 00:10:07.610 14.022 - 14.080: 72.8558% ( 139) 00:10:07.610 14.080 - 14.138: 73.8897% ( 98) 00:10:07.610 14.138 - 14.196: 74.9657% ( 102) 00:10:07.610 14.196 - 14.255: 75.8308% ( 82) 00:10:07.610 14.255 - 14.313: 76.4005% ( 54) 00:10:07.610 14.313 - 14.371: 76.9174% ( 49) 00:10:07.610 14.371 - 14.429: 77.3921% ( 45) 00:10:07.610 14.429 - 14.487: 77.7825% ( 37) 00:10:07.610 14.487 - 14.545: 78.0884% ( 29) 00:10:07.610 14.545 - 14.604: 78.3943% ( 29) 00:10:07.610 14.604 - 14.662: 78.7108% ( 30) 00:10:07.610 14.662 - 14.720: 78.8902% ( 17) 00:10:07.610 14.720 - 14.778: 79.0273% ( 13) 00:10:07.610 14.778 - 14.836: 79.1434% ( 11) 00:10:07.610 14.836 - 14.895: 79.3016% ( 15) 00:10:07.610 14.895 - 15.011: 79.6603% ( 34) 00:10:07.610 15.011 - 15.127: 79.8924% ( 22) 00:10:07.610 15.127 - 15.244: 80.1350% ( 23) 00:10:07.610 15.244 - 15.360: 80.2933% ( 15) 00:10:07.610 15.360 - 15.476: 80.4515% ( 15) 00:10:07.610 15.476 - 15.593: 80.5887% ( 13) 00:10:07.610 15.593 - 15.709: 80.7680% ( 17) 00:10:07.610 15.709 - 15.825: 80.8946% ( 12) 00:10:07.610 15.825 - 15.942: 81.0001% ( 10) 00:10:07.610 15.942 - 16.058: 81.0740% ( 7) 00:10:07.610 16.058 - 16.175: 81.1478% ( 7) 00:10:07.610 16.175 - 16.291: 81.2216% ( 7) 00:10:07.610 16.291 - 16.407: 81.2849% ( 6) 00:10:07.610 16.407 - 16.524: 81.2955% ( 1) 00:10:07.610 16.524 - 16.640: 81.3588% ( 6) 00:10:07.610 16.640 - 16.756: 81.4010% ( 4) 00:10:07.610 16.756 - 16.873: 81.4326% ( 3) 00:10:07.610 16.989 - 17.105: 81.4854% ( 5) 00:10:07.610 17.105 - 17.222: 81.5276% ( 4) 00:10:07.610 17.222 - 17.338: 81.6120% ( 8) 00:10:07.610 17.338 - 17.455: 81.6542% ( 4) 00:10:07.610 17.455 - 17.571: 81.7386% ( 8) 00:10:07.610 17.571 - 17.687: 81.8019% ( 6) 00:10:07.610 17.687 - 17.804: 81.8546% ( 5) 00:10:07.610 17.804 - 17.920: 81.8757% ( 2) 00:10:07.610 17.920 - 18.036: 81.9496% ( 7) 00:10:07.610 18.036 - 18.153: 81.9812% ( 3) 00:10:07.610 18.153 - 18.269: 82.0445% ( 6) 00:10:07.610 18.269 - 18.385: 82.1817% ( 13) 00:10:07.610 18.385 - 18.502: 82.2766% ( 9) 00:10:07.610 18.502 - 18.618: 82.4560% ( 17) 00:10:07.610 18.618 - 18.735: 82.6142% ( 15) 00:10:07.610 18.735 - 18.851: 82.7724% ( 15) 00:10:07.610 18.851 - 18.967: 82.9623% ( 18) 00:10:07.610 18.967 - 19.084: 83.1944% ( 22) 00:10:07.610 19.084 - 19.200: 83.3421% ( 14) 00:10:07.610 19.200 - 19.316: 83.6059% ( 25) 00:10:07.610 19.316 - 19.433: 83.8380% ( 22) 00:10:07.610 19.433 - 19.549: 83.9962% ( 15) 00:10:07.610 19.549 - 19.665: 84.1861% ( 18) 00:10:07.610 19.665 - 19.782: 84.3443% ( 15) 00:10:07.610 19.782 - 19.898: 84.5342% ( 18) 00:10:07.610 19.898 - 20.015: 84.6819% ( 14) 00:10:07.610 20.015 
- 20.131: 84.7874% ( 10) 00:10:07.610 20.131 - 20.247: 84.8296% ( 4) 00:10:07.611 20.247 - 20.364: 85.0512% ( 21) 00:10:07.611 20.364 - 20.480: 85.1778% ( 12) 00:10:07.611 20.480 - 20.596: 85.3149% ( 13) 00:10:07.611 20.596 - 20.713: 85.4521% ( 13) 00:10:07.611 20.713 - 20.829: 85.6103% ( 15) 00:10:07.611 20.829 - 20.945: 85.7263% ( 11) 00:10:07.611 20.945 - 21.062: 85.8424% ( 11) 00:10:07.611 21.062 - 21.178: 85.9479% ( 10) 00:10:07.611 21.178 - 21.295: 86.1167% ( 16) 00:10:07.611 21.295 - 21.411: 86.2116% ( 9) 00:10:07.611 21.411 - 21.527: 86.3382% ( 12) 00:10:07.611 21.527 - 21.644: 86.4015% ( 6) 00:10:07.611 21.644 - 21.760: 86.5176% ( 11) 00:10:07.611 21.760 - 21.876: 86.6020% ( 8) 00:10:07.611 21.876 - 21.993: 86.6547% ( 5) 00:10:07.611 21.993 - 22.109: 86.7497% ( 9) 00:10:07.611 22.109 - 22.225: 86.8446% ( 9) 00:10:07.611 22.225 - 22.342: 86.8974% ( 5) 00:10:07.611 22.342 - 22.458: 86.9501% ( 5) 00:10:07.611 22.458 - 22.575: 87.0661% ( 11) 00:10:07.611 22.575 - 22.691: 87.1189% ( 5) 00:10:07.611 22.691 - 22.807: 87.2244% ( 10) 00:10:07.611 22.807 - 22.924: 87.2877% ( 6) 00:10:07.611 22.924 - 23.040: 87.4670% ( 17) 00:10:07.611 23.040 - 23.156: 87.5303% ( 6) 00:10:07.611 23.156 - 23.273: 87.7097% ( 17) 00:10:07.611 23.273 - 23.389: 87.8046% ( 9) 00:10:07.611 23.389 - 23.505: 87.8679% ( 6) 00:10:07.611 23.505 - 23.622: 87.9523% ( 8) 00:10:07.611 23.622 - 23.738: 88.0051% ( 5) 00:10:07.611 23.738 - 23.855: 88.1317% ( 12) 00:10:07.611 23.855 - 23.971: 88.3005% ( 16) 00:10:07.611 23.971 - 24.087: 88.4059% ( 10) 00:10:07.611 24.087 - 24.204: 88.5536% ( 14) 00:10:07.611 24.204 - 24.320: 88.6908% ( 13) 00:10:07.611 24.320 - 24.436: 88.8174% ( 12) 00:10:07.611 24.436 - 24.553: 88.8912% ( 7) 00:10:07.611 24.553 - 24.669: 89.0178% ( 12) 00:10:07.611 24.669 - 24.785: 89.1655% ( 14) 00:10:07.611 24.785 - 24.902: 89.2394% ( 7) 00:10:07.611 24.902 - 25.018: 89.3871% ( 14) 00:10:07.611 25.018 - 25.135: 89.5242% ( 13) 00:10:07.611 25.135 - 25.251: 89.6508% ( 12) 00:10:07.611 25.251 - 25.367: 89.7985% ( 14) 00:10:07.611 25.367 - 25.484: 89.8934% ( 9) 00:10:07.611 25.484 - 25.600: 90.0411% ( 14) 00:10:07.611 25.600 - 25.716: 90.1361% ( 9) 00:10:07.611 25.716 - 25.833: 90.2732% ( 13) 00:10:07.611 25.833 - 25.949: 90.4526% ( 17) 00:10:07.611 25.949 - 26.065: 90.6530% ( 19) 00:10:07.611 26.065 - 26.182: 90.8535% ( 19) 00:10:07.611 26.182 - 26.298: 90.9906% ( 13) 00:10:07.611 26.298 - 26.415: 91.1805% ( 18) 00:10:07.611 26.415 - 26.531: 91.3915% ( 20) 00:10:07.611 26.531 - 26.647: 91.5919% ( 19) 00:10:07.611 26.647 - 26.764: 91.8557% ( 25) 00:10:07.611 26.764 - 26.880: 91.9612% ( 10) 00:10:07.611 26.880 - 26.996: 92.0983% ( 13) 00:10:07.611 26.996 - 27.113: 92.3304% ( 22) 00:10:07.611 27.113 - 27.229: 92.5309% ( 19) 00:10:07.611 27.229 - 27.345: 92.6891% ( 15) 00:10:07.611 27.345 - 27.462: 92.9528% ( 25) 00:10:07.611 27.462 - 27.578: 93.1005% ( 14) 00:10:07.611 27.578 - 27.695: 93.3748% ( 26) 00:10:07.611 27.695 - 27.811: 93.5858% ( 20) 00:10:07.611 27.811 - 27.927: 93.7863% ( 19) 00:10:07.611 27.927 - 28.044: 94.0500% ( 25) 00:10:07.611 28.044 - 28.160: 94.1872% ( 13) 00:10:07.611 28.160 - 28.276: 94.3454% ( 15) 00:10:07.611 28.276 - 28.393: 94.4720% ( 12) 00:10:07.611 28.393 - 28.509: 94.6513% ( 17) 00:10:07.611 28.509 - 28.625: 94.8729% ( 21) 00:10:07.611 28.625 - 28.742: 95.0311% ( 15) 00:10:07.611 28.742 - 28.858: 95.2527% ( 21) 00:10:07.611 28.858 - 28.975: 95.5059% ( 24) 00:10:07.611 28.975 - 29.091: 95.7063% ( 19) 00:10:07.611 29.091 - 29.207: 95.9384% ( 22) 00:10:07.611 29.207 - 29.324: 96.0861% 
( 14) 00:10:07.611 29.324 - 29.440: 96.3076% ( 21) 00:10:07.611 29.440 - 29.556: 96.4764% ( 16) 00:10:07.611 29.556 - 29.673: 96.7085% ( 22) 00:10:07.611 29.673 - 29.789: 96.8457% ( 13) 00:10:07.611 29.789 - 30.022: 97.2043% ( 34) 00:10:07.611 30.022 - 30.255: 97.5208% ( 30) 00:10:07.611 30.255 - 30.487: 97.7213% ( 19) 00:10:07.611 30.487 - 30.720: 97.8795% ( 15) 00:10:07.611 30.720 - 30.953: 98.0800% ( 19) 00:10:07.611 30.953 - 31.185: 98.2910% ( 20) 00:10:07.611 31.185 - 31.418: 98.4070% ( 11) 00:10:07.611 31.418 - 31.651: 98.5125% ( 10) 00:10:07.611 31.651 - 31.884: 98.5863% ( 7) 00:10:07.611 31.884 - 32.116: 98.6496% ( 6) 00:10:07.611 32.116 - 32.349: 98.6918% ( 4) 00:10:07.611 32.349 - 32.582: 98.7129% ( 2) 00:10:07.611 32.582 - 32.815: 98.7657% ( 5) 00:10:07.611 33.047 - 33.280: 98.8395% ( 7) 00:10:07.611 33.280 - 33.513: 98.8606% ( 2) 00:10:07.611 33.513 - 33.745: 98.9028% ( 4) 00:10:07.611 33.745 - 33.978: 98.9239% ( 2) 00:10:07.611 34.211 - 34.444: 98.9345% ( 1) 00:10:07.611 34.444 - 34.676: 98.9556% ( 2) 00:10:07.611 34.676 - 34.909: 98.9872% ( 3) 00:10:07.611 34.909 - 35.142: 98.9978% ( 1) 00:10:07.611 35.142 - 35.375: 99.0611% ( 6) 00:10:07.611 35.375 - 35.607: 99.1244% ( 6) 00:10:07.611 35.607 - 35.840: 99.1455% ( 2) 00:10:07.611 35.840 - 36.073: 99.2299% ( 8) 00:10:07.611 36.073 - 36.305: 99.3037% ( 7) 00:10:07.611 36.305 - 36.538: 99.3248% ( 2) 00:10:07.611 36.538 - 36.771: 99.3565% ( 3) 00:10:07.611 36.771 - 37.004: 99.3776% ( 2) 00:10:07.611 37.004 - 37.236: 99.4092% ( 3) 00:10:07.611 37.236 - 37.469: 99.4409% ( 3) 00:10:07.611 37.469 - 37.702: 99.4831% ( 4) 00:10:07.611 37.702 - 37.935: 99.5253% ( 4) 00:10:07.611 37.935 - 38.167: 99.5464% ( 2) 00:10:07.611 38.167 - 38.400: 99.5675% ( 2) 00:10:07.611 38.400 - 38.633: 99.6097% ( 4) 00:10:07.611 38.633 - 38.865: 99.6308% ( 2) 00:10:07.611 38.865 - 39.098: 99.6624% ( 3) 00:10:07.611 39.098 - 39.331: 99.6835% ( 2) 00:10:07.611 39.331 - 39.564: 99.7046% ( 2) 00:10:07.611 39.796 - 40.029: 99.7257% ( 2) 00:10:07.611 40.029 - 40.262: 99.7468% ( 2) 00:10:07.611 40.262 - 40.495: 99.7574% ( 1) 00:10:07.611 40.495 - 40.727: 99.7785% ( 2) 00:10:07.611 40.727 - 40.960: 99.7996% ( 2) 00:10:07.611 40.960 - 41.193: 99.8101% ( 1) 00:10:07.611 41.193 - 41.425: 99.8207% ( 1) 00:10:07.611 41.658 - 41.891: 99.8523% ( 3) 00:10:07.611 42.124 - 42.356: 99.8629% ( 1) 00:10:07.611 42.589 - 42.822: 99.8734% ( 1) 00:10:07.611 43.055 - 43.287: 99.8840% ( 1) 00:10:07.611 43.287 - 43.520: 99.8945% ( 1) 00:10:07.611 43.985 - 44.218: 99.9051% ( 1) 00:10:07.611 44.451 - 44.684: 99.9156% ( 1) 00:10:07.611 44.684 - 44.916: 99.9262% ( 1) 00:10:07.611 47.476 - 47.709: 99.9367% ( 1) 00:10:07.611 49.571 - 49.804: 99.9473% ( 1) 00:10:07.611 50.502 - 50.735: 99.9578% ( 1) 00:10:07.611 58.880 - 59.113: 99.9684% ( 1) 00:10:07.611 74.938 - 75.404: 99.9789% ( 1) 00:10:07.611 89.833 - 90.298: 99.9895% ( 1) 00:10:07.611 92.160 - 92.625: 100.0000% ( 1) 00:10:07.611 00:10:07.611 Complete histogram 00:10:07.611 ================== 00:10:07.611 Range in us Cumulative Count 00:10:07.611 9.542 - 9.600: 0.1582% ( 15) 00:10:07.611 9.600 - 9.658: 1.8356% ( 159) 00:10:07.611 9.658 - 9.716: 7.3214% ( 520) 00:10:07.611 9.716 - 9.775: 17.8816% ( 1001) 00:10:07.611 9.775 - 9.833: 30.1298% ( 1161) 00:10:07.611 9.833 - 9.891: 42.7999% ( 1201) 00:10:07.611 9.891 - 9.949: 51.6932% ( 843) 00:10:07.611 9.949 - 10.007: 57.2001% ( 522) 00:10:07.611 10.007 - 10.065: 60.4811% ( 311) 00:10:07.611 10.065 - 10.124: 62.2745% ( 170) 00:10:07.611 10.124 - 10.182: 63.8147% ( 146) 00:10:07.611 10.182 - 
10.240: 65.3339% ( 144) 00:10:07.611 10.240 - 10.298: 67.0957% ( 167) 00:10:07.611 10.298 - 10.356: 68.8891% ( 170) 00:10:07.611 10.356 - 10.415: 70.4083% ( 144) 00:10:07.611 10.415 - 10.473: 71.5371% ( 107) 00:10:07.611 10.473 - 10.531: 73.0035% ( 139) 00:10:07.611 10.531 - 10.589: 74.4066% ( 133) 00:10:07.611 10.589 - 10.647: 75.6725% ( 120) 00:10:07.611 10.647 - 10.705: 76.6853% ( 96) 00:10:07.611 10.705 - 10.764: 77.6242% ( 89) 00:10:07.611 10.764 - 10.822: 78.0990% ( 45) 00:10:07.611 10.822 - 10.880: 78.5948% ( 47) 00:10:07.611 10.880 - 10.938: 78.8691% ( 26) 00:10:07.611 10.938 - 10.996: 79.1434% ( 26) 00:10:07.611 10.996 - 11.055: 79.3333% ( 18) 00:10:07.611 11.055 - 11.113: 79.6709% ( 32) 00:10:07.611 11.113 - 11.171: 79.8818% ( 20) 00:10:07.611 11.171 - 11.229: 80.0190% ( 13) 00:10:07.611 11.229 - 11.287: 80.2827% ( 25) 00:10:07.611 11.287 - 11.345: 80.5570% ( 26) 00:10:07.611 11.345 - 11.404: 80.7891% ( 22) 00:10:07.611 11.404 - 11.462: 81.0212% ( 22) 00:10:07.611 11.462 - 11.520: 81.2216% ( 19) 00:10:07.611 11.520 - 11.578: 81.4537% ( 22) 00:10:07.611 11.578 - 11.636: 81.6120% ( 15) 00:10:07.611 11.636 - 11.695: 81.8968% ( 27) 00:10:07.611 11.695 - 11.753: 82.0656% ( 16) 00:10:07.611 11.753 - 11.811: 82.2555% ( 18) 00:10:07.611 11.811 - 11.869: 82.4243% ( 16) 00:10:07.611 11.869 - 11.927: 82.4982% ( 7) 00:10:07.611 11.927 - 11.985: 82.5193% ( 2) 00:10:07.612 11.985 - 12.044: 82.6142% ( 9) 00:10:07.612 12.044 - 12.102: 82.6669% ( 5) 00:10:07.612 12.102 - 12.160: 82.7408% ( 7) 00:10:07.612 12.160 - 12.218: 82.7830% ( 4) 00:10:07.612 12.218 - 12.276: 82.8146% ( 3) 00:10:07.612 12.276 - 12.335: 82.8674% ( 5) 00:10:07.612 12.335 - 12.393: 82.9307% ( 6) 00:10:07.612 12.393 - 12.451: 82.9834% ( 5) 00:10:07.612 12.451 - 12.509: 83.0362% ( 5) 00:10:07.612 12.509 - 12.567: 83.0573% ( 2) 00:10:07.612 12.567 - 12.625: 83.0995% ( 4) 00:10:07.612 12.625 - 12.684: 83.1417% ( 4) 00:10:07.612 12.684 - 12.742: 83.1944% ( 5) 00:10:07.612 12.742 - 12.800: 83.2683% ( 7) 00:10:07.612 12.800 - 12.858: 83.3210% ( 5) 00:10:07.612 12.858 - 12.916: 83.3738% ( 5) 00:10:07.612 12.916 - 12.975: 83.4476% ( 7) 00:10:07.612 12.975 - 13.033: 83.5109% ( 6) 00:10:07.612 13.033 - 13.091: 83.5637% ( 5) 00:10:07.612 13.091 - 13.149: 83.6270% ( 6) 00:10:07.612 13.149 - 13.207: 83.6375% ( 1) 00:10:07.612 13.207 - 13.265: 83.7114% ( 7) 00:10:07.612 13.265 - 13.324: 83.7219% ( 1) 00:10:07.612 13.324 - 13.382: 83.7536% ( 3) 00:10:07.612 13.382 - 13.440: 83.7747% ( 2) 00:10:07.612 13.440 - 13.498: 83.8485% ( 7) 00:10:07.612 13.498 - 13.556: 83.8696% ( 2) 00:10:07.612 13.556 - 13.615: 83.9224% ( 5) 00:10:07.612 13.615 - 13.673: 83.9857% ( 6) 00:10:07.612 13.731 - 13.789: 84.0173% ( 3) 00:10:07.612 13.789 - 13.847: 84.0700% ( 5) 00:10:07.612 13.847 - 13.905: 84.0806% ( 1) 00:10:07.612 13.905 - 13.964: 84.1228% ( 4) 00:10:07.612 13.964 - 14.022: 84.1755% ( 5) 00:10:07.612 14.022 - 14.080: 84.2599% ( 8) 00:10:07.612 14.080 - 14.138: 84.2916% ( 3) 00:10:07.612 14.138 - 14.196: 84.3338% ( 4) 00:10:07.612 14.196 - 14.255: 84.3549% ( 2) 00:10:07.612 14.255 - 14.313: 84.4076% ( 5) 00:10:07.612 14.313 - 14.371: 84.4604% ( 5) 00:10:07.612 14.371 - 14.429: 84.5342% ( 7) 00:10:07.612 14.429 - 14.487: 84.5870% ( 5) 00:10:07.612 14.487 - 14.545: 84.6292% ( 4) 00:10:07.612 14.545 - 14.604: 84.6925% ( 6) 00:10:07.612 14.604 - 14.662: 84.7241% ( 3) 00:10:07.612 14.662 - 14.720: 84.7874% ( 6) 00:10:07.612 14.720 - 14.778: 84.8191% ( 3) 00:10:07.612 14.778 - 14.836: 84.8613% ( 4) 00:10:07.612 14.836 - 14.895: 84.8824% ( 2) 00:10:07.612 14.895 
- 15.011: 84.9351% ( 5) 00:10:07.612 15.011 - 15.127: 85.0195% ( 8) 00:10:07.612 15.127 - 15.244: 85.0617% ( 4) 00:10:07.612 15.244 - 15.360: 85.1672% ( 10) 00:10:07.612 15.360 - 15.476: 85.2305% ( 6) 00:10:07.612 15.476 - 15.593: 85.3149% ( 8) 00:10:07.612 15.593 - 15.709: 85.4732% ( 15) 00:10:07.612 15.709 - 15.825: 85.5364% ( 6) 00:10:07.612 15.825 - 15.942: 85.5892% ( 5) 00:10:07.612 15.942 - 16.058: 85.6841% ( 9) 00:10:07.612 16.058 - 16.175: 85.8002% ( 11) 00:10:07.612 16.175 - 16.291: 85.9584% ( 15) 00:10:07.612 16.291 - 16.407: 86.0217% ( 6) 00:10:07.612 16.407 - 16.524: 86.1167% ( 9) 00:10:07.612 16.524 - 16.640: 86.3066% ( 18) 00:10:07.612 16.640 - 16.756: 86.5492% ( 23) 00:10:07.612 16.756 - 16.873: 86.8130% ( 25) 00:10:07.612 16.873 - 16.989: 86.9817% ( 16) 00:10:07.612 16.989 - 17.105: 87.1611% ( 17) 00:10:07.612 17.105 - 17.222: 87.4037% ( 23) 00:10:07.612 17.222 - 17.338: 87.6675% ( 25) 00:10:07.612 17.338 - 17.455: 87.9418% ( 26) 00:10:07.612 17.455 - 17.571: 88.1211% ( 17) 00:10:07.612 17.571 - 17.687: 88.3110% ( 18) 00:10:07.612 17.687 - 17.804: 88.5642% ( 24) 00:10:07.612 17.804 - 17.920: 88.8385% ( 26) 00:10:07.612 17.920 - 18.036: 88.9545% ( 11) 00:10:07.612 18.036 - 18.153: 89.2710% ( 30) 00:10:07.612 18.153 - 18.269: 89.5453% ( 26) 00:10:07.612 18.269 - 18.385: 89.7458% ( 19) 00:10:07.612 18.385 - 18.502: 89.9778% ( 22) 00:10:07.612 18.502 - 18.618: 90.2205% ( 23) 00:10:07.612 18.618 - 18.735: 90.3998% ( 17) 00:10:07.612 18.735 - 18.851: 90.6108% ( 20) 00:10:07.612 18.851 - 18.967: 90.8218% ( 20) 00:10:07.612 18.967 - 19.084: 91.0539% ( 22) 00:10:07.612 19.084 - 19.200: 91.3071% ( 24) 00:10:07.612 19.200 - 19.316: 91.6236% ( 30) 00:10:07.612 19.316 - 19.433: 91.8451% ( 21) 00:10:07.612 19.433 - 19.549: 92.1300% ( 27) 00:10:07.612 19.549 - 19.665: 92.3621% ( 22) 00:10:07.612 19.665 - 19.782: 92.6891% ( 31) 00:10:07.612 19.782 - 19.898: 92.9950% ( 29) 00:10:07.612 19.898 - 20.015: 93.1955% ( 19) 00:10:07.612 20.015 - 20.131: 93.4381% ( 23) 00:10:07.612 20.131 - 20.247: 93.6175% ( 17) 00:10:07.612 20.247 - 20.364: 94.0289% ( 39) 00:10:07.612 20.364 - 20.480: 94.3137% ( 27) 00:10:07.612 20.480 - 20.596: 94.5986% ( 27) 00:10:07.612 20.596 - 20.713: 94.8940% ( 28) 00:10:07.612 20.713 - 20.829: 95.1261% ( 22) 00:10:07.612 20.829 - 20.945: 95.3476% ( 21) 00:10:07.612 20.945 - 21.062: 95.5797% ( 22) 00:10:07.612 21.062 - 21.178: 95.8962% ( 30) 00:10:07.612 21.178 - 21.295: 96.0439% ( 14) 00:10:07.612 21.295 - 21.411: 96.3076% ( 25) 00:10:07.612 21.411 - 21.527: 96.5292% ( 21) 00:10:07.612 21.527 - 21.644: 96.7085% ( 17) 00:10:07.612 21.644 - 21.760: 96.8773% ( 16) 00:10:07.612 21.760 - 21.876: 97.0567% ( 17) 00:10:07.612 21.876 - 21.993: 97.1938% ( 13) 00:10:07.612 21.993 - 22.109: 97.3626% ( 16) 00:10:07.612 22.109 - 22.225: 97.5525% ( 18) 00:10:07.612 22.225 - 22.342: 97.7318% ( 17) 00:10:07.612 22.342 - 22.458: 97.8479% ( 11) 00:10:07.612 22.458 - 22.575: 97.9850% ( 13) 00:10:07.612 22.575 - 22.691: 98.0905% ( 10) 00:10:07.612 22.691 - 22.807: 98.3015% ( 20) 00:10:07.612 22.807 - 22.924: 98.4387% ( 13) 00:10:07.612 22.924 - 23.040: 98.5020% ( 6) 00:10:07.612 23.040 - 23.156: 98.5863% ( 8) 00:10:07.612 23.156 - 23.273: 98.7024% ( 11) 00:10:07.612 23.273 - 23.389: 98.7657% ( 6) 00:10:07.612 23.389 - 23.505: 98.8184% ( 5) 00:10:07.612 23.505 - 23.622: 98.8501% ( 3) 00:10:07.612 23.622 - 23.738: 98.8923% ( 4) 00:10:07.612 23.738 - 23.855: 98.9239% ( 3) 00:10:07.612 23.855 - 23.971: 98.9767% ( 5) 00:10:07.612 23.971 - 24.087: 99.0189% ( 4) 00:10:07.612 24.087 - 24.204: 99.0294% 
( 1) 00:10:07.612 24.204 - 24.320: 99.0822% ( 5) 00:10:07.612 24.320 - 24.436: 99.1244% ( 4) 00:10:07.612 24.436 - 24.553: 99.1455% ( 2) 00:10:07.612 24.669 - 24.785: 99.1666% ( 2) 00:10:07.612 24.785 - 24.902: 99.1982% ( 3) 00:10:07.612 24.902 - 25.018: 99.2088% ( 1) 00:10:07.612 25.484 - 25.600: 99.2193% ( 1) 00:10:07.612 25.600 - 25.716: 99.2404% ( 2) 00:10:07.612 25.716 - 25.833: 99.2615% ( 2) 00:10:07.612 25.833 - 25.949: 99.2721% ( 1) 00:10:07.612 25.949 - 26.065: 99.2826% ( 1) 00:10:07.612 26.065 - 26.182: 99.2932% ( 1) 00:10:07.612 26.531 - 26.647: 99.3248% ( 3) 00:10:07.612 26.647 - 26.764: 99.3459% ( 2) 00:10:07.612 26.764 - 26.880: 99.3565% ( 1) 00:10:07.612 26.996 - 27.113: 99.3776% ( 2) 00:10:07.612 27.113 - 27.229: 99.3881% ( 1) 00:10:07.612 27.229 - 27.345: 99.3987% ( 1) 00:10:07.612 27.345 - 27.462: 99.4198% ( 2) 00:10:07.612 27.695 - 27.811: 99.4409% ( 2) 00:10:07.612 27.927 - 28.044: 99.4620% ( 2) 00:10:07.612 28.160 - 28.276: 99.4725% ( 1) 00:10:07.612 28.393 - 28.509: 99.5042% ( 3) 00:10:07.612 28.742 - 28.858: 99.5147% ( 1) 00:10:07.612 28.858 - 28.975: 99.5253% ( 1) 00:10:07.612 28.975 - 29.091: 99.5358% ( 1) 00:10:07.612 29.324 - 29.440: 99.5569% ( 2) 00:10:07.612 29.440 - 29.556: 99.5675% ( 1) 00:10:07.612 29.556 - 29.673: 99.5780% ( 1) 00:10:07.612 29.673 - 29.789: 99.5991% ( 2) 00:10:07.612 29.789 - 30.022: 99.6308% ( 3) 00:10:07.612 30.022 - 30.255: 99.6413% ( 1) 00:10:07.612 30.255 - 30.487: 99.6835% ( 4) 00:10:07.612 30.487 - 30.720: 99.7046% ( 2) 00:10:07.612 30.720 - 30.953: 99.7468% ( 4) 00:10:07.612 31.185 - 31.418: 99.7574% ( 1) 00:10:07.612 31.418 - 31.651: 99.7679% ( 1) 00:10:07.612 31.651 - 31.884: 99.7785% ( 1) 00:10:07.612 31.884 - 32.116: 99.7890% ( 1) 00:10:07.612 32.116 - 32.349: 99.8207% ( 3) 00:10:07.612 32.582 - 32.815: 99.8312% ( 1) 00:10:07.612 32.815 - 33.047: 99.8418% ( 1) 00:10:07.612 33.047 - 33.280: 99.8523% ( 1) 00:10:07.612 33.513 - 33.745: 99.8629% ( 1) 00:10:07.612 34.909 - 35.142: 99.8734% ( 1) 00:10:07.612 35.142 - 35.375: 99.8840% ( 1) 00:10:07.612 35.607 - 35.840: 99.8945% ( 1) 00:10:07.612 37.236 - 37.469: 99.9051% ( 1) 00:10:07.612 37.469 - 37.702: 99.9156% ( 1) 00:10:07.612 39.796 - 40.029: 99.9367% ( 2) 00:10:07.612 40.029 - 40.262: 99.9473% ( 1) 00:10:07.612 40.495 - 40.727: 99.9578% ( 1) 00:10:07.612 77.731 - 78.196: 99.9684% ( 1) 00:10:07.612 104.262 - 104.727: 99.9895% ( 2) 00:10:07.612 202.938 - 203.869: 100.0000% ( 1) 00:10:07.612 00:10:07.612 00:10:07.612 real 0m1.335s 00:10:07.612 user 0m1.129s 00:10:07.612 sys 0m0.146s 00:10:07.612 20:27:01 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.613 20:27:01 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:07.613 ************************************ 00:10:07.613 END TEST nvme_overhead 00:10:07.613 ************************************ 00:10:07.613 20:27:01 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:07.613 20:27:01 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:07.613 20:27:01 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:07.613 20:27:01 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.613 20:27:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:07.613 ************************************ 00:10:07.613 START TEST nvme_arbitration 00:10:07.613 ************************************ 00:10:07.613 20:27:01 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:10.953 Initializing NVMe Controllers 00:10:10.953 Attached to 0000:00:10.0 00:10:10.953 Attached to 0000:00:11.0 00:10:10.953 Attached to 0000:00:13.0 00:10:10.953 Attached to 0000:00:12.0 00:10:10.953 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:10.953 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:10.953 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:10.953 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:10.953 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:10.953 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:10.953 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:10.953 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:10.953 Initialization complete. Launching workers. 00:10:10.953 Starting thread on core 1 with urgent priority queue 00:10:10.953 Starting thread on core 2 with urgent priority queue 00:10:10.953 Starting thread on core 3 with urgent priority queue 00:10:10.953 Starting thread on core 0 with urgent priority queue 00:10:10.953 QEMU NVMe Ctrl (12340 ) core 0: 3669.33 IO/s 27.25 secs/100000 ios 00:10:10.953 QEMU NVMe Ctrl (12342 ) core 0: 3669.33 IO/s 27.25 secs/100000 ios 00:10:10.953 QEMU NVMe Ctrl (12341 ) core 1: 3648.00 IO/s 27.41 secs/100000 ios 00:10:10.953 QEMU NVMe Ctrl (12342 ) core 1: 3648.00 IO/s 27.41 secs/100000 ios 00:10:10.953 QEMU NVMe Ctrl (12343 ) core 2: 3498.67 IO/s 28.58 secs/100000 ios 00:10:10.953 QEMU NVMe Ctrl (12342 ) core 3: 3648.00 IO/s 27.41 secs/100000 ios 00:10:10.953 ======================================================== 00:10:10.953 00:10:10.953 00:10:10.953 real 0m3.315s 00:10:10.953 user 0m9.056s 00:10:10.953 sys 0m0.166s 00:10:10.953 20:27:05 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.953 ************************************ 00:10:10.953 END TEST nvme_arbitration 00:10:10.953 20:27:05 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:10.953 ************************************ 00:10:10.953 20:27:05 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:10.953 20:27:05 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:10.953 20:27:05 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:10.953 20:27:05 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.953 20:27:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.953 ************************************ 00:10:10.953 START TEST nvme_single_aen 00:10:10.953 ************************************ 00:10:10.953 20:27:05 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:11.519 Asynchronous Event Request test 00:10:11.519 Attached to 0000:00:10.0 00:10:11.519 Attached to 0000:00:11.0 00:10:11.519 Attached to 0000:00:13.0 00:10:11.519 Attached to 0000:00:12.0 00:10:11.519 Reset controller to setup AER completions for this process 00:10:11.519 Registering asynchronous event callbacks... 
00:10:11.519 Getting orig temperature thresholds of all controllers 00:10:11.519 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.519 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.520 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.520 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.520 Setting all controllers temperature threshold low to trigger AER 00:10:11.520 Waiting for all controllers temperature threshold to be set lower 00:10:11.520 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.520 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:11.520 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.520 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:11.520 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.520 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:11.520 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.520 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:11.520 Waiting for all controllers to trigger AER and reset threshold 00:10:11.520 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.520 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.520 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.520 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.520 Cleaning up... 00:10:11.520 00:10:11.520 real 0m0.320s 00:10:11.520 user 0m0.119s 00:10:11.520 sys 0m0.152s 00:10:11.520 20:27:05 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.520 20:27:05 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:11.520 ************************************ 00:10:11.520 END TEST nvme_single_aen 00:10:11.520 ************************************ 00:10:11.520 20:27:05 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:11.520 20:27:05 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:11.520 20:27:05 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:11.520 20:27:05 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.520 20:27:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:11.520 ************************************ 00:10:11.520 START TEST nvme_doorbell_aers 00:10:11.520 ************************************ 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:11.520 20:27:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:11.779 [2024-07-12 20:27:05.776209] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:21.746 Executing: test_write_invalid_db 00:10:21.746 Waiting for AER completion... 00:10:21.746 Failure: test_write_invalid_db 00:10:21.746 00:10:21.746 Executing: test_invalid_db_write_overflow_sq 00:10:21.746 Waiting for AER completion... 00:10:21.746 Failure: test_invalid_db_write_overflow_sq 00:10:21.746 00:10:21.746 Executing: test_invalid_db_write_overflow_cq 00:10:21.746 Waiting for AER completion... 00:10:21.746 Failure: test_invalid_db_write_overflow_cq 00:10:21.746 00:10:21.746 20:27:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:21.746 20:27:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:21.746 [2024-07-12 20:27:15.858035] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:31.807 Executing: test_write_invalid_db 00:10:31.807 Waiting for AER completion... 00:10:31.807 Failure: test_write_invalid_db 00:10:31.807 00:10:31.807 Executing: test_invalid_db_write_overflow_sq 00:10:31.807 Waiting for AER completion... 00:10:31.807 Failure: test_invalid_db_write_overflow_sq 00:10:31.807 00:10:31.807 Executing: test_invalid_db_write_overflow_cq 00:10:31.807 Waiting for AER completion... 00:10:31.807 Failure: test_invalid_db_write_overflow_cq 00:10:31.807 00:10:31.807 20:27:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:31.807 20:27:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:31.807 [2024-07-12 20:27:25.875104] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:41.772 Executing: test_write_invalid_db 00:10:41.772 Waiting for AER completion... 00:10:41.772 Failure: test_write_invalid_db 00:10:41.772 00:10:41.772 Executing: test_invalid_db_write_overflow_sq 00:10:41.772 Waiting for AER completion... 00:10:41.772 Failure: test_invalid_db_write_overflow_sq 00:10:41.772 00:10:41.772 Executing: test_invalid_db_write_overflow_cq 00:10:41.772 Waiting for AER completion... 
00:10:41.772 Failure: test_invalid_db_write_overflow_cq 00:10:41.772 00:10:41.772 20:27:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:41.772 20:27:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:41.772 [2024-07-12 20:27:35.894648] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:51.740 Executing: test_write_invalid_db 00:10:51.740 Waiting for AER completion... 00:10:51.740 Failure: test_write_invalid_db 00:10:51.740 00:10:51.740 Executing: test_invalid_db_write_overflow_sq 00:10:51.740 Waiting for AER completion... 00:10:51.740 Failure: test_invalid_db_write_overflow_sq 00:10:51.740 00:10:51.740 Executing: test_invalid_db_write_overflow_cq 00:10:51.740 Waiting for AER completion... 00:10:51.740 Failure: test_invalid_db_write_overflow_cq 00:10:51.740 00:10:51.740 00:10:51.740 real 0m40.245s 00:10:51.740 user 0m33.979s 00:10:51.740 sys 0m5.865s 00:10:51.740 20:27:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:51.740 ************************************ 00:10:51.740 END TEST nvme_doorbell_aers 00:10:51.740 ************************************ 00:10:51.740 20:27:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:51.740 20:27:45 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:51.740 20:27:45 nvme -- nvme/nvme.sh@97 -- # uname 00:10:51.740 20:27:45 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:51.741 20:27:45 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:51.741 20:27:45 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:51.741 20:27:45 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.741 20:27:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:51.741 ************************************ 00:10:51.741 START TEST nvme_multi_aen 00:10:51.741 ************************************ 00:10:51.741 20:27:45 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:51.999 [2024-07-12 20:27:45.979639] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:51.999 [2024-07-12 20:27:45.979807] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:51.999 [2024-07-12 20:27:45.979852] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:51.999 [2024-07-12 20:27:45.981815] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:51.999 [2024-07-12 20:27:45.981890] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:51.999 [2024-07-12 20:27:45.981919] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 
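Every doorbell_aers pass above comes from the same per-device loop in nvme/nvme.sh: the PCIe addresses are enumerated once, then each device gets a 10-second, signal-preserving timeout around the doorbell_aers binary. A condensed sketch of that loop, reconstructed from the xtrace lines (the get_nvme_bdfs helper from test/common/autotest_common.sh is inlined here for readability):

  # $rootdir is the spdk repo root, as in the traces above.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      timeout --preserve-status 10 \
          "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
          -r "trtype:PCIe traddr:$bdf"
  done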
00:10:51.999 [2024-07-12 20:27:45.983800] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:51.999 [2024-07-12 20:27:45.984046] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:51.999 [2024-07-12 20:27:45.984278] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:51.999 [2024-07-12 20:27:45.986037] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:51.999 [2024-07-12 20:27:45.986277] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:51.999 [2024-07-12 20:27:45.986491] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82239) is not found. Dropping the request. 00:10:51.999 Child process pid: 82755 00:10:52.257 [Child] Asynchronous Event Request test 00:10:52.257 [Child] Attached to 0000:00:10.0 00:10:52.257 [Child] Attached to 0000:00:11.0 00:10:52.257 [Child] Attached to 0000:00:13.0 00:10:52.257 [Child] Attached to 0000:00:12.0 00:10:52.257 [Child] Registering asynchronous event callbacks... 00:10:52.257 [Child] Getting orig temperature thresholds of all controllers 00:10:52.257 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.257 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.257 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.257 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.257 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:52.257 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.257 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.257 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.257 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.257 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.257 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.257 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.257 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.257 [Child] Cleaning up... 00:10:52.257 Asynchronous Event Request test 00:10:52.257 Attached to 0000:00:10.0 00:10:52.257 Attached to 0000:00:11.0 00:10:52.257 Attached to 0000:00:13.0 00:10:52.257 Attached to 0000:00:12.0 00:10:52.257 Reset controller to setup AER completions for this process 00:10:52.257 Registering asynchronous event callbacks... 
00:10:52.257 Getting orig temperature thresholds of all controllers 00:10:52.257 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.257 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.258 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.258 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.258 Setting all controllers temperature threshold low to trigger AER 00:10:52.258 Waiting for all controllers temperature threshold to be set lower 00:10:52.258 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.258 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:52.258 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.258 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:52.258 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.258 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:52.258 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.258 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:52.258 Waiting for all controllers to trigger AER and reset threshold 00:10:52.258 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.258 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.258 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.258 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.258 Cleaning up... 00:10:52.258 00:10:52.258 real 0m0.570s 00:10:52.258 user 0m0.205s 00:10:52.258 sys 0m0.245s 00:10:52.258 20:27:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.258 20:27:46 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:52.258 ************************************ 00:10:52.258 END TEST nvme_multi_aen 00:10:52.258 ************************************ 00:10:52.258 20:27:46 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:52.258 20:27:46 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:52.258 20:27:46 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:52.258 20:27:46 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.258 20:27:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:52.258 ************************************ 00:10:52.258 START TEST nvme_startup 00:10:52.258 ************************************ 00:10:52.258 20:27:46 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:52.515 Initializing NVMe Controllers 00:10:52.515 Attached to 0000:00:10.0 00:10:52.515 Attached to 0000:00:11.0 00:10:52.515 Attached to 0000:00:13.0 00:10:52.515 Attached to 0000:00:12.0 00:10:52.515 Initialization complete. 00:10:52.515 Time used:191479.562 (us). 
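Both AER tests above drive the same binary, test/nvme/aer/aer: nvme_single_aen ran it with -T -i 0 in a single process, while nvme_multi_aen added -m so that a child process attaches to the same controllers first (hence the [Child] prefix on the first block of output, followed by the parent's own pass). The invocations, exactly as run_test issued them above:

  # single-process pass
  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
  # multi-process pass: the child process output is tagged [Child]
  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0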
00:10:52.515 00:10:52.515 real 0m0.281s 00:10:52.515 user 0m0.103s 00:10:52.515 sys 0m0.141s 00:10:52.515 ************************************ 00:10:52.515 END TEST nvme_startup 00:10:52.515 ************************************ 00:10:52.515 20:27:46 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.515 20:27:46 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:52.773 20:27:46 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:52.773 20:27:46 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:52.773 20:27:46 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:52.773 20:27:46 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.773 20:27:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:52.773 ************************************ 00:10:52.773 START TEST nvme_multi_secondary 00:10:52.773 ************************************ 00:10:52.773 20:27:46 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:10:52.773 20:27:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=82811 00:10:52.773 20:27:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:52.773 20:27:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=82812 00:10:52.773 20:27:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:52.773 20:27:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:56.055 Initializing NVMe Controllers 00:10:56.055 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:56.055 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:56.055 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:56.055 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:56.055 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:56.055 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:56.055 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:56.055 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:56.055 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:56.055 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:56.055 Initialization complete. Launching workers. 
00:10:56.055 ======================================================== 00:10:56.055 Latency(us) 00:10:56.055 Device Information : IOPS MiB/s Average min max 00:10:56.055 PCIE (0000:00:10.0) NSID 1 from core 1: 5593.22 21.85 2858.59 966.82 10810.82 00:10:56.055 PCIE (0000:00:11.0) NSID 1 from core 1: 5593.22 21.85 2860.34 952.43 10836.69 00:10:56.055 PCIE (0000:00:13.0) NSID 1 from core 1: 5593.22 21.85 2860.64 947.89 10387.09 00:10:56.055 PCIE (0000:00:12.0) NSID 1 from core 1: 5593.22 21.85 2860.79 946.46 10492.73 00:10:56.055 PCIE (0000:00:12.0) NSID 2 from core 1: 5593.22 21.85 2861.24 947.55 10541.17 00:10:56.055 PCIE (0000:00:12.0) NSID 3 from core 1: 5593.22 21.85 2861.23 964.51 11035.02 00:10:56.055 ======================================================== 00:10:56.055 Total : 33559.31 131.09 2860.47 946.46 11035.02 00:10:56.055 00:10:56.055 Initializing NVMe Controllers 00:10:56.055 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:56.055 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:56.055 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:56.055 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:56.055 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:56.055 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:56.055 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:56.055 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:56.055 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:56.055 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:56.055 Initialization complete. Launching workers. 00:10:56.055 ======================================================== 00:10:56.055 Latency(us) 00:10:56.055 Device Information : IOPS MiB/s Average min max 00:10:56.055 PCIE (0000:00:10.0) NSID 1 from core 2: 2503.26 9.78 6388.58 1439.23 13648.11 00:10:56.055 PCIE (0000:00:11.0) NSID 1 from core 2: 2503.26 9.78 6390.63 1401.49 13458.67 00:10:56.055 PCIE (0000:00:13.0) NSID 1 from core 2: 2503.26 9.78 6386.87 1326.68 13577.63 00:10:56.055 PCIE (0000:00:12.0) NSID 1 from core 2: 2503.26 9.78 6381.78 1544.20 13254.74 00:10:56.055 PCIE (0000:00:12.0) NSID 2 from core 2: 2503.26 9.78 6381.20 1568.03 13347.12 00:10:56.055 PCIE (0000:00:12.0) NSID 3 from core 2: 2503.26 9.78 6382.51 1437.03 13063.33 00:10:56.055 ======================================================== 00:10:56.055 Total : 15019.55 58.67 6385.26 1326.68 13648.11 00:10:56.055 00:10:56.055 20:27:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 82811 00:10:58.585 Initializing NVMe Controllers 00:10:58.585 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:58.585 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:58.585 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:58.585 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:58.585 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:58.585 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:58.585 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:58.585 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:58.585 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:58.585 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:58.585 Initialization complete. Launching workers. 
00:10:58.585 ======================================================== 00:10:58.585 Latency(us) 00:10:58.585 Device Information : IOPS MiB/s Average min max 00:10:58.585 PCIE (0000:00:10.0) NSID 1 from core 0: 8360.53 32.66 1912.04 907.70 7034.16 00:10:58.585 PCIE (0000:00:11.0) NSID 1 from core 0: 8360.53 32.66 1913.22 935.39 6125.99 00:10:58.585 PCIE (0000:00:13.0) NSID 1 from core 0: 8360.53 32.66 1913.12 849.89 6265.72 00:10:58.585 PCIE (0000:00:12.0) NSID 1 from core 0: 8360.33 32.66 1913.06 748.88 6189.19 00:10:58.585 PCIE (0000:00:12.0) NSID 2 from core 0: 8360.53 32.66 1912.90 652.04 6100.11 00:10:58.585 PCIE (0000:00:12.0) NSID 3 from core 0: 8360.53 32.66 1912.77 555.64 6314.35 00:10:58.585 ======================================================== 00:10:58.585 Total : 50163.01 195.95 1912.85 555.64 7034.16 00:10:58.585 00:10:58.585 20:27:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 82812 00:10:58.585 20:27:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=82883 00:10:58.585 20:27:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:58.585 20:27:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:58.585 20:27:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=82886 00:10:58.585 20:27:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:01.925 Initializing NVMe Controllers 00:11:01.925 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:01.925 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:01.925 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:01.925 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:01.925 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:01.925 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:01.925 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:01.925 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:01.925 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:01.925 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:01.925 Initialization complete. Launching workers. 
00:11:01.925 ======================================================== 00:11:01.925 Latency(us) 00:11:01.925 Device Information : IOPS MiB/s Average min max 00:11:01.925 PCIE (0000:00:10.0) NSID 1 from core 0: 5320.94 20.78 3004.99 942.77 9009.13 00:11:01.925 PCIE (0000:00:11.0) NSID 1 from core 0: 5320.94 20.78 3006.54 963.30 8683.85 00:11:01.925 PCIE (0000:00:13.0) NSID 1 from core 0: 5320.94 20.78 3006.55 945.40 7790.70 00:11:01.925 PCIE (0000:00:12.0) NSID 1 from core 0: 5320.94 20.78 3006.47 943.45 7287.18 00:11:01.925 PCIE (0000:00:12.0) NSID 2 from core 0: 5320.94 20.78 3006.37 949.86 8004.40 00:11:01.925 PCIE (0000:00:12.0) NSID 3 from core 0: 5326.27 20.81 3003.26 918.74 8578.78 00:11:01.925 ======================================================== 00:11:01.925 Total : 31930.98 124.73 3005.70 918.74 9009.13 00:11:01.925 00:11:01.925 Initializing NVMe Controllers 00:11:01.925 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:01.925 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:01.925 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:01.925 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:01.925 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:01.925 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:01.925 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:01.925 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:01.925 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:01.925 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:01.925 Initialization complete. Launching workers. 00:11:01.925 ======================================================== 00:11:01.925 Latency(us) 00:11:01.925 Device Information : IOPS MiB/s Average min max 00:11:01.925 PCIE (0000:00:10.0) NSID 1 from core 1: 5142.50 20.09 3109.21 1385.65 10911.62 00:11:01.925 PCIE (0000:00:11.0) NSID 1 from core 1: 5142.50 20.09 3110.75 1442.29 11267.10 00:11:01.925 PCIE (0000:00:13.0) NSID 1 from core 1: 5142.50 20.09 3110.67 1382.03 11393.93 00:11:01.925 PCIE (0000:00:12.0) NSID 1 from core 1: 5142.50 20.09 3110.57 1391.24 10863.52 00:11:01.925 PCIE (0000:00:12.0) NSID 2 from core 1: 5142.50 20.09 3110.49 1363.85 10683.48 00:11:01.925 PCIE (0000:00:12.0) NSID 3 from core 1: 5142.50 20.09 3110.38 1037.36 10530.00 00:11:01.925 ======================================================== 00:11:01.925 Total : 30854.98 120.53 3110.34 1037.36 11393.93 00:11:01.925 00:11:03.826 Initializing NVMe Controllers 00:11:03.826 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:03.826 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:03.826 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:03.826 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:03.826 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:03.826 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:03.826 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:03.826 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:03.826 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:03.826 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:03.826 Initialization complete. Launching workers. 
00:11:03.826 ======================================================== 00:11:03.826 Latency(us) 00:11:03.826 Device Information : IOPS MiB/s Average min max 00:11:03.826 PCIE (0000:00:10.0) NSID 1 from core 2: 3567.18 13.93 4482.42 1102.61 15172.75 00:11:03.826 PCIE (0000:00:11.0) NSID 1 from core 2: 3567.18 13.93 4484.73 1060.33 13862.38 00:11:03.826 PCIE (0000:00:13.0) NSID 1 from core 2: 3567.18 13.93 4484.53 1024.26 13973.77 00:11:03.826 PCIE (0000:00:12.0) NSID 1 from core 2: 3567.18 13.93 4480.71 886.68 15713.48 00:11:03.826 PCIE (0000:00:12.0) NSID 2 from core 2: 3567.18 13.93 4480.02 742.94 16136.86 00:11:03.826 PCIE (0000:00:12.0) NSID 3 from core 2: 3567.18 13.93 4480.23 555.06 16522.15 00:11:03.826 ======================================================== 00:11:03.826 Total : 21403.09 83.61 4482.11 555.06 16522.15 00:11:03.826 00:11:03.826 ************************************ 00:11:03.826 END TEST nvme_multi_secondary 00:11:03.826 ************************************ 00:11:03.826 20:27:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 82883 00:11:03.826 20:27:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 82886 00:11:03.826 00:11:03.826 real 0m11.087s 00:11:03.826 user 0m18.456s 00:11:03.826 sys 0m0.860s 00:11:03.826 20:27:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:03.826 20:27:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:03.826 20:27:57 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:03.826 20:27:57 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:03.826 20:27:57 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:03.826 20:27:57 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/81825 ]] 00:11:03.826 20:27:57 nvme -- common/autotest_common.sh@1088 -- # kill 81825 00:11:03.826 20:27:57 nvme -- common/autotest_common.sh@1089 -- # wait 81825 00:11:03.826 [2024-07-12 20:27:57.810326] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.810958] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.811020] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.811043] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.811910] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.811965] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.811992] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.812012] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 
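The nvme_multi_secondary test that has just finished starts three spdk_nvme_perf processes against the same controllers in each phase, all with -i 0 so they join one SPDK shared-memory group; the longer run presumably acts as the primary process while the two shorter runs attach as secondaries on other cores. A sketch of the first phase, copied from the pid0/pid1 setup lines earlier in the log (the backgrounding here only indicates that they run concurrently):

  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # 5 s run on core 0
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # 3 s run on core 1
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # 3 s run on core 2
  wait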
00:11:03.826 [2024-07-12 20:27:57.812951] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.813001] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.813032] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.813052] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.813986] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.814034] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.814060] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.826 [2024-07-12 20:27:57.814080] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82754) is not found. Dropping the request. 00:11:03.827 [2024-07-12 20:27:57.912510] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:11:03.827 20:27:57 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:11:03.827 20:27:57 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:11:03.827 20:27:57 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:03.827 20:27:57 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:03.827 20:27:57 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.827 20:27:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:03.827 ************************************ 00:11:03.827 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:03.827 ************************************ 00:11:03.827 20:27:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:04.178 * Looking for test storage... 
00:11:04.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:11:04.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=83041 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 83041 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 83041 ']' 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.178 20:27:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:04.178 [2024-07-12 20:27:58.202424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:11:04.178 [2024-07-12 20:27:58.202622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83041 ] 00:11:04.436 [2024-07-12 20:27:58.375934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:04.436 [2024-07-12 20:27:58.397213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.436 [2024-07-12 20:27:58.497776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.436 [2024-07-12 20:27:58.497930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.436 [2024-07-12 20:27:58.498073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.436 [2024-07-12 20:27:58.498144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:05.371 nvme0n1 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_FATlA.txt 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:05.371 true 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.371 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:05.372 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720816079 00:11:05.372 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=83064 00:11:05.372 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:05.372 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:05.372 20:27:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:07.274 [2024-07-12 20:28:01.254504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:11:07.274 [2024-07-12 20:28:01.255015] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:07.274 [2024-07-12 20:28:01.255192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:07.274 [2024-07-12 20:28:01.255370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.274 [2024-07-12 20:28:01.257483] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:07.274 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 83064 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 83064 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 83064 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_FATlA.txt 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- 
# local bin_array status 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_FATlA.txt 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 83041 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 83041 ']' 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 83041 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83041 00:11:07.274 killing process with pid 83041 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83041' 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 83041 00:11:07.274 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 83041 00:11:07.840 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:07.840 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:07.840 ************************************ 00:11:07.840 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:07.840 ************************************ 00:11:07.840 00:11:07.840 real 0m3.892s 00:11:07.840 user 0m13.573s 00:11:07.840 sys 0m0.668s 00:11:07.840 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.840 20:28:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:07.840 20:28:01 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:07.840 20:28:01 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:07.840 20:28:01 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:07.840 20:28:01 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:07.840 20:28:01 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.840 
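The bdev_nvme_reset_stuck_adm_cmd test that wrapped up just above is driven entirely over RPC against a spdk_tgt instance (started with -m 0xF). Condensed from the xtrace, the flow is: attach the first controller as nvme0, arm a one-shot error injection that holds the next Get Features admin command (opcode 10) for up to 15 s, issue such a command asynchronously with bdev_nvme_send_cmd, reset the controller while that command is stuck, then check the injected status and the elapsed time. A sketch using rpc.py for every step (the test itself goes through the rpc_cmd helper for most of them):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # <payload> stands for the base64-encoded Get Features command shown in full above
  $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <payload> &
  sleep 2
  $rpc bdev_nvme_reset_controller nvme0
  $rpc bdev_nvme_detach_controller nvme0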
20:28:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:07.840 ************************************ 00:11:07.840 START TEST nvme_fio 00:11:07.840 ************************************ 00:11:07.840 20:28:01 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:11:07.840 20:28:01 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:07.840 20:28:01 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:07.840 20:28:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:07.840 20:28:01 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:07.840 20:28:01 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:11:07.840 20:28:01 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:07.840 20:28:01 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:07.840 20:28:01 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:07.840 20:28:01 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:07.840 20:28:01 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:07.840 20:28:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:07.840 20:28:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:07.840 20:28:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:07.840 20:28:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:07.840 20:28:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:08.098 20:28:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:08.098 20:28:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:08.356 20:28:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:08.356 20:28:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
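The nvme_fio setup being traced here ends up invoking stock fio with SPDK's NVMe ioengine plugin preloaded; the ASan runtime is added to LD_PRELOAD only because this build is sanitized. Stripped to its essentials, the command for the first device matches the trace that follows (note that the PCI address in --filename uses dots rather than colons, since fio would otherwise split the name on ':'):

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096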
00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:08.356 20:28:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:08.614 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:08.614 fio-3.35 00:11:08.614 Starting 1 thread 00:11:11.896 00:11:11.896 test: (groupid=0, jobs=1): err= 0: pid=83195: Fri Jul 12 20:28:05 2024 00:11:11.896 read: IOPS=15.2k, BW=59.2MiB/s (62.1MB/s)(118MiB/2001msec) 00:11:11.896 slat (nsec): min=4757, max=53006, avg=7193.93, stdev=2078.80 00:11:11.896 clat (usec): min=324, max=11030, avg=4201.07, stdev=551.55 00:11:11.896 lat (usec): min=329, max=11083, avg=4208.26, stdev=552.32 00:11:11.896 clat percentiles (usec): 00:11:11.896 | 1.00th=[ 3228], 5.00th=[ 3458], 10.00th=[ 3589], 20.00th=[ 3752], 00:11:11.896 | 30.00th=[ 3884], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4359], 00:11:11.896 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4752], 00:11:11.896 | 99.00th=[ 6390], 99.50th=[ 7046], 99.90th=[ 8029], 99.95th=[ 9110], 00:11:11.896 | 99.99th=[10814] 00:11:11.896 bw ( KiB/s): min=59872, max=61904, per=100.00%, avg=60754.67, stdev=1041.92, samples=3 00:11:11.896 iops : min=14968, max=15476, avg=15188.67, stdev=260.48, samples=3 00:11:11.896 write: IOPS=15.2k, BW=59.3MiB/s (62.2MB/s)(119MiB/2001msec); 0 zone resets 00:11:11.896 slat (nsec): min=4818, max=63313, avg=7262.28, stdev=2095.66 00:11:11.896 clat (usec): min=277, max=10885, avg=4206.88, stdev=551.75 00:11:11.896 lat (usec): min=283, max=10914, avg=4214.14, stdev=552.51 00:11:11.896 clat percentiles (usec): 00:11:11.896 | 1.00th=[ 3228], 5.00th=[ 3458], 10.00th=[ 3589], 20.00th=[ 3785], 00:11:11.896 | 30.00th=[ 3916], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4359], 00:11:11.896 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4752], 00:11:11.896 | 99.00th=[ 6325], 99.50th=[ 6980], 99.90th=[ 8029], 99.95th=[ 9372], 00:11:11.896 | 99.99th=[10552] 00:11:11.896 bw ( KiB/s): min=59944, max=61040, per=99.41%, avg=60397.33, stdev=572.00, samples=3 00:11:11.896 iops : min=14986, max=15260, avg=15099.33, stdev=143.00, samples=3 00:11:11.896 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:11.896 lat (msec) : 2=0.05%, 4=34.49%, 10=65.38%, 20=0.04% 00:11:11.896 cpu : usr=99.00%, sys=0.10%, ctx=4, majf=0, minf=625 00:11:11.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:11.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.897 issued rwts: total=30333,30392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.897 00:11:11.897 Run status group 0 (all jobs): 00:11:11.897 READ: bw=59.2MiB/s (62.1MB/s), 59.2MiB/s-59.2MiB/s (62.1MB/s-62.1MB/s), io=118MiB (124MB), run=2001-2001msec 
00:11:11.897 WRITE: bw=59.3MiB/s (62.2MB/s), 59.3MiB/s-59.3MiB/s (62.2MB/s-62.2MB/s), io=119MiB (124MB), run=2001-2001msec 00:11:11.897 ----------------------------------------------------- 00:11:11.897 Suppressions used: 00:11:11.897 count bytes template 00:11:11.897 1 32 /usr/src/fio/parse.c 00:11:11.897 1 8 libtcmalloc_minimal.so 00:11:11.897 ----------------------------------------------------- 00:11:11.897 00:11:11.897 20:28:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:11.897 20:28:05 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:11.897 20:28:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:11.897 20:28:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:12.155 20:28:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:12.155 20:28:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:12.413 20:28:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:12.413 20:28:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:12.413 20:28:06 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:12.670 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:12.670 fio-3.35 00:11:12.670 Starting 1 thread 00:11:15.953 00:11:15.953 test: (groupid=0, jobs=1): err= 0: pid=83260: Fri Jul 12 20:28:09 2024 00:11:15.953 
read: IOPS=16.3k, BW=63.6MiB/s (66.7MB/s)(127MiB/2001msec) 00:11:15.953 slat (nsec): min=4713, max=52169, avg=6626.13, stdev=1910.85 00:11:15.953 clat (usec): min=386, max=11090, avg=3908.94, stdev=695.51 00:11:15.953 lat (usec): min=393, max=11142, avg=3915.57, stdev=696.43 00:11:15.953 clat percentiles (usec): 00:11:15.953 | 1.00th=[ 2868], 5.00th=[ 3261], 10.00th=[ 3359], 20.00th=[ 3425], 00:11:15.953 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3654], 60.00th=[ 4015], 00:11:15.953 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4686], 00:11:15.953 | 99.00th=[ 6521], 99.50th=[ 7373], 99.90th=[ 8586], 99.95th=[ 9634], 00:11:15.953 | 99.99th=[10945] 00:11:15.953 bw ( KiB/s): min=63760, max=66768, per=100.00%, avg=65552.00, stdev=1584.57, samples=3 00:11:15.953 iops : min=15940, max=16692, avg=16388.00, stdev=396.14, samples=3 00:11:15.953 write: IOPS=16.3k, BW=63.8MiB/s (66.9MB/s)(128MiB/2001msec); 0 zone resets 00:11:15.953 slat (nsec): min=4887, max=48008, avg=6762.72, stdev=1994.28 00:11:15.953 clat (usec): min=296, max=11001, avg=3915.41, stdev=689.66 00:11:15.953 lat (usec): min=303, max=11019, avg=3922.17, stdev=690.59 00:11:15.953 clat percentiles (usec): 00:11:15.953 | 1.00th=[ 2900], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3425], 00:11:15.953 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3654], 60.00th=[ 4047], 00:11:15.953 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4686], 00:11:15.953 | 99.00th=[ 6456], 99.50th=[ 7373], 99.90th=[ 8586], 99.95th=[ 9372], 00:11:15.953 | 99.99th=[10814] 00:11:15.953 bw ( KiB/s): min=63584, max=66920, per=100.00%, avg=65381.33, stdev=1682.98, samples=3 00:11:15.953 iops : min=15896, max=16730, avg=16345.33, stdev=420.74, samples=3 00:11:15.953 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:11:15.953 lat (msec) : 2=0.21%, 4=58.58%, 10=41.14%, 20=0.03% 00:11:15.953 cpu : usr=99.05%, sys=0.15%, ctx=5, majf=0, minf=625 00:11:15.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:15.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:15.953 issued rwts: total=32592,32666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:15.953 00:11:15.953 Run status group 0 (all jobs): 00:11:15.953 READ: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=127MiB (133MB), run=2001-2001msec 00:11:15.953 WRITE: bw=63.8MiB/s (66.9MB/s), 63.8MiB/s-63.8MiB/s (66.9MB/s-66.9MB/s), io=128MiB (134MB), run=2001-2001msec 00:11:15.954 ----------------------------------------------------- 00:11:15.954 Suppressions used: 00:11:15.954 count bytes template 00:11:15.954 1 32 /usr/src/fio/parse.c 00:11:15.954 1 8 libtcmalloc_minimal.so 00:11:15.954 ----------------------------------------------------- 00:11:15.954 00:11:15.954 20:28:10 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:15.954 20:28:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:15.954 20:28:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:15.954 20:28:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:16.212 20:28:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:16.213 20:28:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # 
grep -q 'Extended Data LBA' 00:11:16.472 20:28:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:16.472 20:28:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:16.472 20:28:10 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:16.731 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:16.731 fio-3.35 00:11:16.731 Starting 1 thread 00:11:20.010 00:11:20.010 test: (groupid=0, jobs=1): err= 0: pid=83321: Fri Jul 12 20:28:14 2024 00:11:20.010 read: IOPS=15.6k, BW=61.1MiB/s (64.1MB/s)(122MiB/2001msec) 00:11:20.010 slat (nsec): min=4711, max=44499, avg=6872.25, stdev=2030.36 00:11:20.010 clat (usec): min=228, max=9180, avg=4071.47, stdev=682.46 00:11:20.010 lat (usec): min=234, max=9222, avg=4078.35, stdev=683.44 00:11:20.010 clat percentiles (usec): 00:11:20.010 | 1.00th=[ 3130], 5.00th=[ 3326], 10.00th=[ 3425], 20.00th=[ 3490], 00:11:20.010 | 30.00th=[ 3589], 40.00th=[ 3916], 50.00th=[ 4113], 60.00th=[ 4228], 00:11:20.010 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 5342], 00:11:20.010 | 99.00th=[ 6718], 99.50th=[ 7177], 99.90th=[ 7701], 99.95th=[ 7832], 00:11:20.010 | 99.99th=[ 8979] 00:11:20.010 bw ( KiB/s): min=59472, max=70224, per=100.00%, avg=63306.33, stdev=6002.63, samples=3 00:11:20.010 iops : min=14868, max=17556, avg=15826.33, stdev=1500.85, samples=3 00:11:20.010 write: IOPS=15.7k, BW=61.2MiB/s (64.1MB/s)(122MiB/2001msec); 0 zone resets 00:11:20.010 slat (nsec): min=4839, max=51914, avg=7032.63, stdev=2090.88 00:11:20.010 clat 
(usec): min=297, max=8993, avg=4080.99, stdev=683.88 00:11:20.010 lat (usec): min=303, max=9006, avg=4088.02, stdev=684.86 00:11:20.010 clat percentiles (usec): 00:11:20.010 | 1.00th=[ 3130], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3490], 00:11:20.010 | 30.00th=[ 3621], 40.00th=[ 3949], 50.00th=[ 4113], 60.00th=[ 4228], 00:11:20.010 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 5407], 00:11:20.010 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7701], 99.95th=[ 7898], 00:11:20.010 | 99.99th=[ 8717] 00:11:20.010 bw ( KiB/s): min=58968, max=69392, per=100.00%, avg=63002.00, stdev=5597.14, samples=3 00:11:20.010 iops : min=14742, max=17348, avg=15750.33, stdev=1399.39, samples=3 00:11:20.010 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:20.010 lat (msec) : 2=0.06%, 4=41.54%, 10=58.35% 00:11:20.010 cpu : usr=98.90%, sys=0.10%, ctx=3, majf=0, minf=625 00:11:20.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:20.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.010 issued rwts: total=31311,31336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.010 00:11:20.010 Run status group 0 (all jobs): 00:11:20.010 READ: bw=61.1MiB/s (64.1MB/s), 61.1MiB/s-61.1MiB/s (64.1MB/s-64.1MB/s), io=122MiB (128MB), run=2001-2001msec 00:11:20.010 WRITE: bw=61.2MiB/s (64.1MB/s), 61.2MiB/s-61.2MiB/s (64.1MB/s-64.1MB/s), io=122MiB (128MB), run=2001-2001msec 00:11:20.268 ----------------------------------------------------- 00:11:20.268 Suppressions used: 00:11:20.268 count bytes template 00:11:20.268 1 32 /usr/src/fio/parse.c 00:11:20.268 1 8 libtcmalloc_minimal.so 00:11:20.268 ----------------------------------------------------- 00:11:20.268 00:11:20.534 20:28:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:20.534 20:28:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:20.534 20:28:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:20.534 20:28:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:20.800 20:28:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:20.800 20:28:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:20.800 20:28:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:20.800 20:28:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:20.800 20:28:14 nvme.nvme_fio -- 
common/autotest_common.sh@1341 -- # shift 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:20.800 20:28:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:21.058 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:21.058 fio-3.35 00:11:21.058 Starting 1 thread 00:11:24.343 00:11:24.343 test: (groupid=0, jobs=1): err= 0: pid=83382: Fri Jul 12 20:28:18 2024 00:11:24.343 read: IOPS=15.4k, BW=60.2MiB/s (63.1MB/s)(120MiB/2001msec) 00:11:24.343 slat (usec): min=4, max=2755, avg= 6.82, stdev=15.81 00:11:24.343 clat (usec): min=242, max=11347, avg=4128.89, stdev=920.87 00:11:24.343 lat (usec): min=248, max=11356, avg=4135.71, stdev=922.26 00:11:24.343 clat percentiles (usec): 00:11:24.343 | 1.00th=[ 3032], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3654], 00:11:24.343 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3884], 00:11:24.343 | 70.00th=[ 4015], 80.00th=[ 4621], 90.00th=[ 4948], 95.00th=[ 5538], 00:11:24.343 | 99.00th=[ 8094], 99.50th=[ 8455], 99.90th=[10290], 99.95th=[10814], 00:11:24.343 | 99.99th=[11338] 00:11:24.343 bw ( KiB/s): min=57504, max=67808, per=100.00%, avg=63570.67, stdev=5390.08, samples=3 00:11:24.343 iops : min=14376, max=16952, avg=15892.67, stdev=1347.52, samples=3 00:11:24.343 write: IOPS=15.4k, BW=60.2MiB/s (63.2MB/s)(121MiB/2001msec); 0 zone resets 00:11:24.343 slat (nsec): min=4870, max=48554, avg=6855.81, stdev=2341.10 00:11:24.343 clat (usec): min=272, max=11523, avg=4146.87, stdev=933.25 00:11:24.343 lat (usec): min=278, max=11529, avg=4153.73, stdev=934.57 00:11:24.343 clat percentiles (usec): 00:11:24.343 | 1.00th=[ 2999], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3654], 00:11:24.343 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3884], 00:11:24.343 | 70.00th=[ 4047], 80.00th=[ 4621], 90.00th=[ 4948], 95.00th=[ 5604], 00:11:24.343 | 99.00th=[ 8094], 99.50th=[ 8225], 99.90th=[10290], 99.95th=[10683], 00:11:24.343 | 99.99th=[11338] 00:11:24.343 bw ( KiB/s): min=56704, max=67280, per=100.00%, avg=63285.33, stdev=5742.92, samples=3 00:11:24.343 iops : min=14176, max=16820, avg=15821.33, stdev=1435.73, samples=3 00:11:24.343 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:24.343 lat (msec) : 2=0.12%, 4=68.37%, 10=31.33%, 20=0.14% 00:11:24.343 cpu : usr=98.65%, sys=0.20%, ctx=10, majf=0, minf=622 00:11:24.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:24.343 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.343 issued rwts: total=30837,30860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.343 00:11:24.343 Run status group 0 (all jobs): 00:11:24.343 READ: bw=60.2MiB/s (63.1MB/s), 60.2MiB/s-60.2MiB/s (63.1MB/s-63.1MB/s), io=120MiB (126MB), run=2001-2001msec 00:11:24.343 WRITE: bw=60.2MiB/s (63.2MB/s), 60.2MiB/s-60.2MiB/s (63.2MB/s-63.2MB/s), io=121MiB (126MB), run=2001-2001msec 00:11:24.601 ----------------------------------------------------- 00:11:24.601 Suppressions used: 00:11:24.601 count bytes template 00:11:24.601 1 32 /usr/src/fio/parse.c 00:11:24.601 1 8 libtcmalloc_minimal.so 00:11:24.601 ----------------------------------------------------- 00:11:24.601 00:11:24.601 20:28:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:24.601 20:28:18 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:24.601 00:11:24.601 real 0m16.788s 00:11:24.601 user 0m13.497s 00:11:24.601 sys 0m1.895s 00:11:24.601 20:28:18 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:24.601 ************************************ 00:11:24.601 END TEST nvme_fio 00:11:24.601 ************************************ 00:11:24.601 20:28:18 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:24.601 20:28:18 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:24.601 ************************************ 00:11:24.601 END TEST nvme 00:11:24.601 ************************************ 00:11:24.601 00:11:24.601 real 1m28.164s 00:11:24.601 user 3m35.985s 00:11:24.601 sys 0m14.624s 00:11:24.601 20:28:18 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:24.601 20:28:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:24.601 20:28:18 -- common/autotest_common.sh@1142 -- # return 0 00:11:24.601 20:28:18 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:11:24.601 20:28:18 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:24.601 20:28:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:24.601 20:28:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.601 20:28:18 -- common/autotest_common.sh@10 -- # set +x 00:11:24.860 ************************************ 00:11:24.860 START TEST nvme_scc 00:11:24.860 ************************************ 00:11:24.860 20:28:18 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:24.860 * Looking for test storage... 
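The three fio runs traced above all follow the same SPDK fio-plugin pattern: locate the ASAN runtime the plugin was linked against, preload it ahead of the plugin itself, and hand the controller's PCIe address to fio through --filename (with '.' in place of ':' so fio does not split the address). A minimal sketch of that invocation, assuming the plugin, job file, and fio paths shown in this run:

    #!/usr/bin/env bash
    # Sketch of the fio_nvme/fio_plugin pattern traced above (paths taken from this run).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    # Find the ASAN runtime the plugin links against, if any, so it can be preloaded first.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # fio loads the SPDK ioengine via LD_PRELOAD; the PCIe address is encoded in --filename.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job" \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

The fio_plugin() helper in common/autotest_common.sh does the same thing but loops over both libasan and libclang_rt.asan when deciding which sanitizer runtime, if any, to preload alongside the plugin.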
00:11:24.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:24.860 20:28:18 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:24.860 20:28:18 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.860 20:28:18 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.860 20:28:18 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.860 20:28:18 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.860 20:28:18 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.860 20:28:18 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.860 20:28:18 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:24.860 20:28:18 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:24.860 20:28:18 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:24.860 20:28:18 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:24.860 20:28:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:24.860 20:28:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:24.860 20:28:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:24.860 20:28:18 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:25.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:25.377 Waiting for block devices as requested 00:11:25.377 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.377 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.689 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.689 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:30.960 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:30.960 20:28:24 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:30.960 20:28:24 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:30.960 20:28:24 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:30.960 20:28:24 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:30.960 20:28:24 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:30.960 
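The long dump that begins here comes from scan_nvme_ctrls()/nvme_get() in test/common/nvme/functions.sh: each controller is queried with nvme id-ctrl (and each namespace with id-ns), and every "field : value" line is folded into a Bash associative array such as nvme0 or nvme0n1. A rough, illustrative sketch of that parsing loop follows; the variable names are placeholders rather than the exact helper code, and the nvme-cli path is the one used in this run:

    # Illustrative sketch of nvme_get-style parsing of id-ctrl output.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue            # skip banner and blank lines
        reg=${reg//[[:space:]]/}             # strip whitespace from the field name
        val=${val#"${val%%[![:space:]]*}"}   # trim leading spaces from the value
        ctrl[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"

The remainder of the trace is simply this loop replayed under xtrace, one eval and assignment per id-ctrl (and later id-ns) field.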
20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.960 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:30.961 20:28:24 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.961 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:30.962 20:28:24 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:30.962 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.963 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:30.964 20:28:24 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:30.964 20:28:24 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:30.964 20:28:24 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:30.964 20:28:24 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:30.965 20:28:24 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:30.965 20:28:24 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:30.965 20:28:24 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.965 
20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:30.965 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:30.966 20:28:24 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:30.966 20:28:24 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:30.966 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:30.967 20:28:24 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:30.967 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 
20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:30.968 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.969 20:28:24 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:30.969 20:28:24 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:30.969 20:28:24 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:30.969 20:28:24 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:30.969 20:28:24 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.969 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.970 20:28:24 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:30.970 20:28:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.970 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.970 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.970 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
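By this point the trace has registered nvme1 (ctrls/nvmes/bdfs/ordered_ctrls at functions.sh@60-63), moved to the next /sys/class/nvme entry, accepted PCI address 0000:00:12.0 via pci_can_use, and is filling nvme2[] from `nvme id-ctrl /dev/nvme2`. A sketch of that discovery loop follows, under stated assumptions: pci_can_use and nvme_get are stubbed, and the derivation of $pci from the sysfs device link is an assumption (the trace only shows the resulting value).

# Sketch of the controller discovery loop seen at functions.sh@47-63; stubs and
# the $pci derivation are assumptions, the array bookkeeping mirrors the trace.
declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()

pci_can_use() { true; }   # stub: the real helper in scripts/common.sh filters on PCI allow/block lists
nvme_get() { :; }         # stub: the real helper parses id-ctrl/id-ns output into the named array

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")   # assumption; trace only shows the result, e.g. 0000:00:12.0
    pci_can_use "$pci" || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme2
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    ctrls["$ctrl_dev"]=$ctrl_dev                      # ctrls[nvme2]=nvme2
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # nvmes[nvme2]=nvme2_ns
    bdfs["$ctrl_dev"]=$pci                            # bdfs[nvme2]=0000:00:12.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # ordered_ctrls[2]=nvme2
done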
00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.971 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:30.972 20:28:25 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 
20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:30.972 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:30.973 20:28:25 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:30.973 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:30.974 20:28:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.235 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
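For readers following the xtrace, every eval line above comes from one small helper in nvme/functions.sh. A minimal sketch of that helper, reconstructed from the trace: the name nvme_get, the local -gA declaration and the IFS=:/read/eval pattern are taken from the lines above, while the whitespace trimming and the way the nvme-cli binary is fed into the loop are assumptions.

    # nvme_get <array-name> <nvme-cli args...>
    # Runs nvme-cli and folds each "field : value" output line into a global
    # associative array named after the device, e.g. nvme2n2[nsze]=0x100000.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # declares e.g. nvme2n2=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # skip lines without a "field : value" pair
            reg=${reg//[[:space:]]/}             # assumption: strip padding around the key
            eval "${ref}[$reg]=\"${val# }\""     # nvme2n2[nsze]="0x100000", nvme2n2[lbaf4]="ms:0 ...", ...
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

A call such as nvme_get nvme2n2 id-ns /dev/nvme2n2 (functions.sh@57 above) would then produce the kind of assignments traced in the surrounding lines.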
00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:31.236 20:28:25 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:31.236 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.237 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
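The mssrl, mcl and msrc values captured above (128, 128 and 127 on each of these QEMU namespaces) are the Copy-command limits that matter for an nvme_scc run: MSSRL caps the logical blocks in one source range, MCL caps the logical blocks in one whole Copy command, and MSRC is a 0's-based count of source ranges. A small illustrative helper, not part of nvme/functions.sh, that turns them into byte figures assuming the 4096-byte in-use format shown in the lbaf4 entries:

    # copy_limits <namespace-array> [block-size-bytes]
    # Prints the per-range and per-command Copy limits implied by id-ns.
    copy_limits() {
        local -n ns=$1
        local bs=${2:-4096}
        local ranges=$(( ${ns[msrc]} + 1 ))      # MSRC is 0's based: 127 -> 128 ranges
        printf 'max %d source ranges, %d bytes per range, %d bytes per Copy\n' \
               "$ranges" "$(( ${ns[mssrl]} * bs ))" "$(( ${ns[mcl]} * bs ))"
    }

For the arrays traced here, copy_limits nvme2n3 reports 128 source ranges, 524288 bytes (512 KiB) per range and the same per Copy command.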
00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.238 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
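Each namespace above advertises eight LBA formats (lbaf0 through lbaf7) with flbas=0x4, and nvme-cli flags lbaf4 (ms:0 lbads:12 rp:0) as the one in use. A short sketch, illustrative only and built on the arrays nvme_get populated, of how that selection decodes into a block size:

    # lbaf_in_use <namespace-array>
    # FLBAS bits 3:0 pick the active format; LBADS is a power-of-two data size
    # and MS is the per-block metadata size.
    lbaf_in_use() {
        local -n ns=$1
        local idx=$(( ${ns[flbas]} & 0xf ))      # 0x4 -> format 4
        local fmt=${ns[lbaf$idx]}                # "ms:0 lbads:12 rp:0 (in use)"
        local lbads=${fmt#*lbads:}; lbads=${lbads%% *}
        local ms=${fmt#ms:};        ms=${ms%% *}
        printf '%s uses lbaf%d: %d-byte blocks, %d bytes of metadata\n' \
               "$1" "$idx" "$(( 1 << lbads ))" "$ms"
    }

Run against nvme2n3 this prints 4096-byte blocks with no metadata, which is just what the (in use) marker already says.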
00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:31.239 20:28:25 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:31.239 20:28:25 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:31.239 20:28:25 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:31.239 20:28:25 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:31.239 20:28:25 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.239 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:31.240 20:28:25 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:31.240 20:28:25 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.240 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:31.241 20:28:25 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
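The sqes=0x66 and cqes=0x44 values captured just above pack two 4-bit exponents each (required and maximum queue entry size as powers of two). A minimal decoding sketch, not part of the test scripts, assuming only plain bash arithmetic:

# Hypothetical helper: decode the packed SQES/CQES exponents seen in the trace.
# Bits 3:0 hold the required (minimum) entry size and bits 7:4 the maximum,
# both as log2 of the byte count, so 0x66 -> 64-byte SQ entries and
# 0x44 -> 16-byte CQ entries.
sqes=0x66
cqes=0x44
printf 'SQ entry size: min %d B, max %d B\n' $((2 ** (sqes & 0xf))) $((2 ** ((sqes >> 4) & 0xf)))
printf 'CQ entry size: min %d B, max %d B\n' $((2 ** (cqes & 0xf))) $((2 ** ((cqes >> 4) & 0xf)))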
00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.241 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:31.242 
20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
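The trace above is functions.sh folding "nvme id-ctrl" output into the nvme3 associative array one register at a time via read and eval. A minimal stand-alone sketch of that read-and-store pattern, assuming nvme-cli is installed and /dev/nvme0 exists; this is an illustration, not the SPDK implementation:

# Parse "register : value" lines from id-ctrl into a bash associative array.
declare -A ctrl_regs
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                 # register name, whitespace stripped
    val=${val#"${val%%[![:space:]]*}"}       # value, leading whitespace trimmed
    [[ -n $reg && -n $val ]] || continue     # skip blank or partial lines
    ctrl_regs[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)
echo "oncs=${ctrl_regs[oncs]:-unset} subnqn=${ctrl_regs[subnqn]:-unset}"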
00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:31.242 20:28:25 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:31.242 20:28:25 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:31.243 20:28:25 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:11:31.243 20:28:25 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:11:31.243 20:28:25 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:31.243 20:28:25 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:31.243 20:28:25 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:31.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:32.376 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.376 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.376 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.376 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.376 20:28:26 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:32.376 20:28:26 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:32.376 20:28:26 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.376 20:28:26 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:32.376 ************************************ 00:11:32.376 START TEST nvme_simple_copy 00:11:32.376 ************************************ 00:11:32.376 20:28:26 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:32.635 Initializing NVMe Controllers 00:11:32.635 Attaching to 0000:00:10.0 00:11:32.635 Controller supports SCC. Attached to 0000:00:10.0 00:11:32.635 Namespace ID: 1 size: 6GB 00:11:32.635 Initialization complete. 00:11:32.635 00:11:32.635 Controller QEMU NVMe Ctrl (12340 ) 00:11:32.635 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:32.635 Namespace Block Size:4096 00:11:32.635 Writing LBAs 0 to 63 with Random Data 00:11:32.635 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:32.635 LBAs matching Written Data: 64 00:11:32.635 00:11:32.635 real 0m0.297s 00:11:32.635 user 0m0.106s 00:11:32.635 sys 0m0.089s 00:11:32.635 20:28:26 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.635 ************************************ 00:11:32.635 END TEST nvme_simple_copy 00:11:32.635 ************************************ 00:11:32.635 20:28:26 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 20:28:26 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:11:32.893 ************************************ 00:11:32.893 END TEST nvme_scc 00:11:32.893 ************************************ 00:11:32.893 00:11:32.893 real 0m8.063s 00:11:32.893 user 0m1.308s 00:11:32.893 sys 0m1.615s 00:11:32.893 20:28:26 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.893 20:28:26 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 20:28:26 -- common/autotest_common.sh@1142 -- # return 0 00:11:32.893 20:28:26 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:11:32.893 20:28:26 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:11:32.893 20:28:26 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:32.893 20:28:26 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:11:32.893 20:28:26 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:32.893 20:28:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:32.893 20:28:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.893 20:28:26 -- common/autotest_common.sh@10 -- # set +x 00:11:32.893 ************************************ 00:11:32.893 START TEST nvme_fdp 00:11:32.893 ************************************ 00:11:32.893 20:28:26 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:11:32.893 * Looking for test storage... 
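The nvme1 controller used for the simple-copy run above was selected by ctrl_has_scc, which tests bit 8 of the ONCS value (Simple Copy / Copy command support). A minimal sketch of that bit test using the 0x15d value from the trace; variable names are illustrative:

# ONCS bit 8 advertises the Copy (simple copy) command; 0x15d has that bit set.
oncs=0x15d
if (( oncs & (1 << 8) )); then
    echo "ONCS bit 8 set: controller advertises the Simple Copy command"
else
    echo "controller does not advertise Simple Copy"
fi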
00:11:32.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:32.893 20:28:26 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:32.893 20:28:26 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:32.893 20:28:26 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:32.893 20:28:26 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:32.893 20:28:26 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:32.893 20:28:26 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.893 20:28:26 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.893 20:28:26 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.893 20:28:26 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.893 20:28:26 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.893 20:28:26 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.893 20:28:26 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:32.894 20:28:26 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.894 20:28:26 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:32.894 20:28:26 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:32.894 20:28:26 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:32.894 20:28:26 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:32.894 20:28:26 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:32.894 20:28:26 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:32.894 20:28:26 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:32.894 20:28:26 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:32.894 20:28:26 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:32.894 20:28:26 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.894 20:28:26 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:33.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.411 Waiting for block devices as requested 00:11:33.411 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.669 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.669 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.669 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:38.938 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:38.938 20:28:32 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:38.938 20:28:32 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:38.938 20:28:32 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:38.938 20:28:32 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:38.938 20:28:32 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 
20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:38.938 20:28:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.938 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:38.939 20:28:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:38.939 20:28:32 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:38.939 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:38.940 
20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:38.940 
20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:38.940 20:28:32 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.940 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:38.941 20:28:32 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:38.941 20:28:32 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:38.941 20:28:32 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:38.941 20:28:32 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:38.941 20:28:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 
20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.941 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:38.942 20:28:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:38.942 20:28:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.942 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.943 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.944 
20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:38.944 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:39.207 20:28:33 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:39.207 20:28:33 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:39.207 20:28:33 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.207 20:28:33 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:39.207 20:28:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:39.208 20:28:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:39.208 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:39.209 20:28:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.209 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:39.210 20:28:33 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 
20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:39.210 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:39.211 20:28:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.211 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:39.212 20:28:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:39.212 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
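The records above and below are the nvme_get helper from nvme/functions.sh splitting each "field : value" line of nvme-cli's id-ns output on ':' and eval'ing it into a global associative array such as nvme2n2=(). A minimal sketch of that pattern follows; the function name get_id_fields and the whitespace trimming are illustrative simplifications, not the exact upstream code:

    get_id_fields() {                             # usage: get_id_fields <array-name> <id-ns|id-ctrl> <device>
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                       # global associative array, as in functions.sh@20
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue  # skip blank or header lines
            reg=${reg//[[:space:]]/}              # "lbaf  0 " -> "lbaf0"
            val=${val# }                          # drop the space after ':'
            eval "${ref}[$reg]=\"\$val\""         # e.g. nvme2n2[nsze]="0x100000"
        done < <(nvme "$cmd" "$dev")              # the test invokes /usr/local/src/nvme-cli/nvme here (functions.sh@16)
    }

After get_id_fields nvme2n2 id-ns /dev/nvme2n2, ${nvme2n2[flbas]} would read back the 0x4 seen in the trace.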
00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:39.213 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:39.214 20:28:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.214 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:39.215 20:28:33 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:39.215 20:28:33 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:39.215 20:28:33 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.215 20:28:33 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:39.215 20:28:33 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.215 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
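Once populated, later steps of the fdp test can query these arrays directly. A hypothetical check, not taken from the log, that flags controllers advertising Flexible Data Placement, assuming FDP capability is reported in CTRATT bit 19 (the 0x88010 recorded above has that bit set) and that the ctrls and bdfs arrays are filled as in the trace:

    fdp_capable() {                               # usage: fdp_capable nvme3
        local -n _ctrl=$1                         # nameref to that controller's array
        (( ${_ctrl[ctratt]:-0} & 0x80000 ))       # CTRATT bit 19: Flexible Data Placement
    }

    for ctrl in "${!ctrls[@]}"; do                # keys such as nvme2, nvme3
        fdp_capable "$ctrl" && echo "FDP candidate: $ctrl at ${bdfs[$ctrl]}"
    done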
00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.216 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:39.217 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:39.476 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:39.476 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.476 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.476 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.476 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:39.476 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:39.476 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.476 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.476 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 
20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:39.477 20:28:33 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:39.477 20:28:33 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:39.477 20:28:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:11:39.478 20:28:33 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:11:39.478 20:28:33 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:39.478 20:28:33 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:39.478 20:28:33 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:39.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:40.300 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.301 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.558 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.558 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.558 20:28:34 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:40.558 20:28:34 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:40.558 20:28:34 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.558 20:28:34 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:40.558 ************************************ 00:11:40.558 START TEST nvme_flexible_data_placement 00:11:40.558 ************************************ 00:11:40.558 20:28:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:40.816 Initializing NVMe Controllers 00:11:40.816 Attaching to 0000:00:13.0 00:11:40.816 Controller supports FDP Attached to 0000:00:13.0 00:11:40.816 Namespace ID: 1 Endurance Group ID: 1 
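The controller driven through the FDP run above (nvme3 at 0000:00:13.0) is picked by the ctrl_has_fdp check traced just before the attach: each controller's CTRATT value is read from the parsed identify data and bit 19, the Flexible Data Placement capability bit, is tested. Only nvme3 reports 0x88010 (bit set); the others report 0x8000. A minimal standalone sketch of that test, with the function name and loop values chosen here purely for illustration:

#!/usr/bin/env bash
# Sketch of the CTRATT bit-19 test performed by ctrl_has_fdp in nvme/functions.sh.
# The sample values below are the ones reported in this run.
ctratt_has_fdp() {
  local ctratt=$1
  (( ctratt & 1 << 19 ))        # exit status 0 when the FDP bit is set
}
for ctratt in 0x8000 0x88010; do
  if ctratt_has_fdp "$ctratt"; then
    echo "ctratt=$ctratt: FDP supported"      # nvme3 in this log
  else
    echo "ctratt=$ctratt: FDP not supported"  # nvme0/nvme1/nvme2 in this log
  fi
done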
00:11:40.816 Initialization complete. 00:11:40.816 00:11:40.816 ================================== 00:11:40.816 == FDP tests for Namespace: #01 == 00:11:40.816 ================================== 00:11:40.816 00:11:40.816 Get Feature: FDP: 00:11:40.816 ================= 00:11:40.816 Enabled: Yes 00:11:40.816 FDP configuration Index: 0 00:11:40.816 00:11:40.816 FDP configurations log page 00:11:40.816 =========================== 00:11:40.816 Number of FDP configurations: 1 00:11:40.816 Version: 0 00:11:40.816 Size: 112 00:11:40.816 FDP Configuration Descriptor: 0 00:11:40.816 Descriptor Size: 96 00:11:40.816 Reclaim Group Identifier format: 2 00:11:40.816 FDP Volatile Write Cache: Not Present 00:11:40.816 FDP Configuration: Valid 00:11:40.816 Vendor Specific Size: 0 00:11:40.816 Number of Reclaim Groups: 2 00:11:40.816 Number of Reclaim Unit Handles: 8 00:11:40.816 Max Placement Identifiers: 128 00:11:40.816 Number of Namespaces Supported: 256 00:11:40.816 Reclaim unit Nominal Size: 6000000 bytes 00:11:40.816 Estimated Reclaim Unit Time Limit: Not Reported 00:11:40.816 RUH Desc #000: RUH Type: Initially Isolated 00:11:40.816 RUH Desc #001: RUH Type: Initially Isolated 00:11:40.816 RUH Desc #002: RUH Type: Initially Isolated 00:11:40.816 RUH Desc #003: RUH Type: Initially Isolated 00:11:40.816 RUH Desc #004: RUH Type: Initially Isolated 00:11:40.816 RUH Desc #005: RUH Type: Initially Isolated 00:11:40.816 RUH Desc #006: RUH Type: Initially Isolated 00:11:40.816 RUH Desc #007: RUH Type: Initially Isolated 00:11:40.816 00:11:40.816 FDP reclaim unit handle usage log page 00:11:40.816 ====================================== 00:11:40.816 Number of Reclaim Unit Handles: 8 00:11:40.816 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:40.816 RUH Usage Desc #001: RUH Attributes: Unused 00:11:40.816 RUH Usage Desc #002: RUH Attributes: Unused 00:11:40.816 RUH Usage Desc #003: RUH Attributes: Unused 00:11:40.816 RUH Usage Desc #004: RUH Attributes: Unused 00:11:40.816 RUH Usage Desc #005: RUH Attributes: Unused 00:11:40.816 RUH Usage Desc #006: RUH Attributes: Unused 00:11:40.816 RUH Usage Desc #007: RUH Attributes: Unused 00:11:40.816 00:11:40.816 FDP statistics log page 00:11:40.816 ======================= 00:11:40.816 Host bytes with metadata written: 1355378688 00:11:40.816 Media bytes with metadata written: 1356038144 00:11:40.816 Media bytes erased: 0 00:11:40.816 00:11:40.816 FDP Reclaim unit handle status 00:11:40.816 ============================== 00:11:40.816 Number of RUHS descriptors: 2 00:11:40.816 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003369 00:11:40.817 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:40.817 00:11:40.817 FDP write on placement id: 0 success 00:11:40.817 00:11:40.817 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:40.817 00:11:40.817 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:40.817 00:11:40.817 Get Feature: FDP Events for Placement handle: #0 00:11:40.817 ======================== 00:11:40.817 Number of FDP Events: 6 00:11:40.817 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:40.817 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:40.817 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:40.817 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:40.817 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:40.817 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
00:11:40.817 00:11:40.817 FDP events log page 00:11:40.817 =================== 00:11:40.817 Number of FDP events: 1 00:11:40.817 FDP Event #0: 00:11:40.817 Event Type: RU Not Written to Capacity 00:11:40.817 Placement Identifier: Valid 00:11:40.817 NSID: Valid 00:11:40.817 Location: Valid 00:11:40.817 Placement Identifier: 0 00:11:40.817 Event Timestamp: 4 00:11:40.817 Namespace Identifier: 1 00:11:40.817 Reclaim Group Identifier: 0 00:11:40.817 Reclaim Unit Handle Identifier: 0 00:11:40.817 00:11:40.817 FDP test passed 00:11:40.817 ************************************ 00:11:40.817 END TEST nvme_flexible_data_placement 00:11:40.817 ************************************ 00:11:40.817 00:11:40.817 real 0m0.286s 00:11:40.817 user 0m0.099s 00:11:40.817 sys 0m0.085s 00:11:40.817 20:28:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.817 20:28:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:40.817 20:28:34 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:11:40.817 00:11:40.817 real 0m8.061s 00:11:40.817 user 0m1.259s 00:11:40.817 sys 0m1.700s 00:11:40.817 ************************************ 00:11:40.817 END TEST nvme_fdp 00:11:40.817 ************************************ 00:11:40.817 20:28:34 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.817 20:28:34 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:40.817 20:28:34 -- common/autotest_common.sh@1142 -- # return 0 00:11:40.817 20:28:34 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:11:40.817 20:28:34 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:40.817 20:28:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:40.817 20:28:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.817 20:28:34 -- common/autotest_common.sh@10 -- # set +x 00:11:41.075 ************************************ 00:11:41.075 START TEST nvme_rpc 00:11:41.075 ************************************ 00:11:41.075 20:28:34 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:41.075 * Looking for test storage... 
00:11:41.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:41.075 20:28:35 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:41.075 20:28:35 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:11:41.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.075 20:28:35 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:41.075 20:28:35 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=84718 00:11:41.075 20:28:35 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:41.075 20:28:35 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:41.075 20:28:35 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 84718 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 84718 ']' 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.075 20:28:35 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.333 [2024-07-12 20:28:35.246911] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:11:41.334 [2024-07-12 20:28:35.247562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84718 ] 00:11:41.334 [2024-07-12 20:28:35.392669] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:41.334 [2024-07-12 20:28:35.411772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:41.592 [2024-07-12 20:28:35.509191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.592 [2024-07-12 20:28:35.509226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.157 20:28:36 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.157 20:28:36 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:42.157 20:28:36 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:42.723 Nvme0n1 00:11:42.723 20:28:36 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:42.723 20:28:36 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:42.723 request: 00:11:42.723 { 00:11:42.723 "bdev_name": "Nvme0n1", 00:11:42.723 "filename": "non_existing_file", 00:11:42.723 "method": "bdev_nvme_apply_firmware", 00:11:42.723 "req_id": 1 00:11:42.723 } 00:11:42.723 Got JSON-RPC error response 00:11:42.723 response: 00:11:42.723 { 00:11:42.723 "code": -32603, 00:11:42.723 "message": "open file failed." 00:11:42.723 } 00:11:42.723 20:28:36 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:42.723 20:28:36 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:42.723 20:28:36 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:42.981 20:28:37 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:42.981 20:28:37 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 84718 00:11:42.981 20:28:37 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 84718 ']' 00:11:42.981 20:28:37 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 84718 00:11:42.981 20:28:37 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:11:42.981 20:28:37 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:42.981 20:28:37 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84718 00:11:42.981 20:28:37 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:42.981 killing process with pid 84718 00:11:42.981 20:28:37 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:42.981 20:28:37 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84718' 00:11:42.981 20:28:37 nvme_rpc -- common/autotest_common.sh@967 -- # kill 84718 00:11:42.981 20:28:37 nvme_rpc -- common/autotest_common.sh@972 -- # wait 84718 00:11:43.560 ************************************ 00:11:43.560 END TEST nvme_rpc 00:11:43.560 ************************************ 00:11:43.560 00:11:43.560 real 0m2.575s 00:11:43.560 user 0m5.133s 00:11:43.560 sys 0m0.640s 00:11:43.560 20:28:37 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.560 20:28:37 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.560 20:28:37 -- common/autotest_common.sh@1142 -- # return 0 00:11:43.560 20:28:37 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:43.560 20:28:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:43.560 20:28:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.560 20:28:37 -- common/autotest_common.sh@10 -- # set +x 00:11:43.560 ************************************ 00:11:43.560 START TEST nvme_rpc_timeouts 
00:11:43.560 ************************************ 00:11:43.560 20:28:37 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:43.560 * Looking for test storage... 00:11:43.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:43.560 20:28:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:43.560 20:28:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_84772 00:11:43.560 20:28:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_84772 00:11:43.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.560 20:28:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=84796 00:11:43.560 20:28:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:43.560 20:28:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 84796 00:11:43.560 20:28:37 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 84796 ']' 00:11:43.560 20:28:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:43.560 20:28:37 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.560 20:28:37 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.560 20:28:37 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.560 20:28:37 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.560 20:28:37 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:43.819 [2024-07-12 20:28:37.771599] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:11:43.819 [2024-07-12 20:28:37.772156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84796 ] 00:11:43.819 [2024-07-12 20:28:37.917706] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:43.819 [2024-07-12 20:28:37.936584] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:44.086 [2024-07-12 20:28:38.029159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.087 [2024-07-12 20:28:38.029206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.660 Checking default timeout settings: 00:11:44.660 20:28:38 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.660 20:28:38 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:11:44.660 20:28:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:44.660 20:28:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:44.917 Making settings changes with rpc: 00:11:44.917 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:44.917 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:45.176 Check default vs. modified settings: 00:11:45.176 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:45.176 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_84772 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_84772 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:45.742 Setting action_on_timeout is changed as expected. 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_84772 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_84772 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:45.742 Setting timeout_us is changed as expected. 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_84772 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_84772 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:45.742 Setting timeout_admin_us is changed as expected. 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
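The three comparisons above all follow the same pipeline: grep the setting out of the saved default and modified save_config dumps, keep the second field with awk, strip punctuation with sed, and compare the results. A self-contained sketch of that pattern follows; the temp files and sample JSON lines are stand-ins for the /tmp/settings_default_84772 and /tmp/settings_modified_84772 dumps written above, and -w is added to grep here so that timeout_us does not also match timeout_admin_us:

#!/usr/bin/env bash
# Condensed sketch of the default-vs-modified settings check from
# nvme_rpc_timeouts.sh: extract a setting from each saved config dump
# and show how it changed after bdev_nvme_set_options.
set -euo pipefail

get_setting() {   # $1 = setting name, $2 = settings dump
  grep -w "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
}

# Stand-in dumps containing the values seen in this run.
default_file=$(mktemp)
modified_file=$(mktemp)
printf '  "action_on_timeout": "none",\n  "timeout_us": 0,\n  "timeout_admin_us": 0,\n' > "$default_file"
printf '  "action_on_timeout": "abort",\n  "timeout_us": 12000000,\n  "timeout_admin_us": 24000000,\n' > "$modified_file"

for setting in action_on_timeout timeout_us timeout_admin_us; do
  before=$(get_setting "$setting" "$default_file")
  after=$(get_setting "$setting" "$modified_file")
  echo "Setting $setting: $before -> $after"
done
rm -f "$default_file" "$modified_file"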
00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_84772 /tmp/settings_modified_84772 00:11:45.742 20:28:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 84796 00:11:45.742 20:28:39 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 84796 ']' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 84796 00:11:45.742 20:28:39 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:11:45.742 20:28:39 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84796 00:11:45.742 killing process with pid 84796 00:11:45.742 20:28:39 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:45.742 20:28:39 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84796' 00:11:45.742 20:28:39 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 84796 00:11:45.742 20:28:39 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 84796 00:11:46.000 RPC TIMEOUT SETTING TEST PASSED. 00:11:46.000 20:28:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:46.000 00:11:46.000 real 0m2.524s 00:11:46.000 user 0m5.022s 00:11:46.000 sys 0m0.615s 00:11:46.000 ************************************ 00:11:46.000 END TEST nvme_rpc_timeouts 00:11:46.000 ************************************ 00:11:46.000 20:28:40 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.000 20:28:40 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:46.257 20:28:40 -- common/autotest_common.sh@1142 -- # return 0 00:11:46.257 20:28:40 -- spdk/autotest.sh@243 -- # uname -s 00:11:46.257 20:28:40 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:11:46.257 20:28:40 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:46.257 20:28:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:46.257 20:28:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.257 20:28:40 -- common/autotest_common.sh@10 -- # set +x 00:11:46.257 ************************************ 00:11:46.257 START TEST sw_hotplug 00:11:46.257 ************************************ 00:11:46.257 20:28:40 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:46.257 * Looking for test storage... 
00:11:46.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:46.257 20:28:40 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:46.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:46.774 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:46.774 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:46.774 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:46.774 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:46.774 20:28:40 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:46.774 20:28:40 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:46.774 20:28:40 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:11:46.774 20:28:40 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@230 -- # local class 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:11:46.774 20:28:40 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:11:46.774 20:28:40 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:46.774 20:28:40 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:46.774 20:28:40 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:46.774 20:28:40 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:47.032 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:47.317 Waiting for block devices as requested 00:11:47.317 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.317 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.574 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.574 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.833 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:52.833 20:28:46 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:52.833 20:28:46 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:53.091 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:53.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:53.091 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:53.349 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:53.606 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:53.606 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:53.865 20:28:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=85636 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:53.865 20:28:47 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:11:53.865 20:28:47 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:11:53.865 20:28:47 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:11:53.865 20:28:47 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:11:53.865 20:28:47 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:53.865 20:28:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:54.122 Initializing NVMe Controllers 00:11:54.122 Attaching to 0000:00:10.0 00:11:54.122 Attaching to 0000:00:11.0 00:11:54.122 Attached to 0000:00:10.0 00:11:54.122 Attached to 0000:00:11.0 00:11:54.122 Initialization complete. Starting I/O... 
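The two devices the hotplug example attaches above come from the nvme_in_userspace enumeration traced earlier: lspci is asked for class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express), and the harness then keeps only the BDFs that pass pci_can_use and truncates the list to nvme_count=2. A condensed sketch of just the discovery pipeline, mirroring the traced commands:

#!/usr/bin/env bash
# Discovery step of nvme_in_userspace: list NVMe controllers by PCI class
# code 0108 with prog-if 02, printing one BDF per line.
lspci -mm -n -D \
  | grep -i -- -p02 \
  | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
  | tr -d '"'
# In this run the pipeline yields 0000:00:10.0, 0000:00:11.0, 0000:00:12.0
# and 0000:00:13.0; sw_hotplug.sh then keeps only the first two (nvme_count=2).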
00:11:54.122 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:54.122 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:54.122 00:11:55.070 QEMU NVMe Ctrl (12340 ): 1192 I/Os completed (+1192) 00:11:55.070 QEMU NVMe Ctrl (12341 ): 1322 I/Os completed (+1322) 00:11:55.070 00:11:56.004 QEMU NVMe Ctrl (12340 ): 2809 I/Os completed (+1617) 00:11:56.005 QEMU NVMe Ctrl (12341 ): 3077 I/Os completed (+1755) 00:11:56.005 00:11:57.374 QEMU NVMe Ctrl (12340 ): 4732 I/Os completed (+1923) 00:11:57.374 QEMU NVMe Ctrl (12341 ): 5176 I/Os completed (+2099) 00:11:57.374 00:11:58.306 QEMU NVMe Ctrl (12340 ): 6685 I/Os completed (+1953) 00:11:58.306 QEMU NVMe Ctrl (12341 ): 7206 I/Os completed (+2030) 00:11:58.306 00:11:59.238 QEMU NVMe Ctrl (12340 ): 8661 I/Os completed (+1976) 00:11:59.238 QEMU NVMe Ctrl (12341 ): 9252 I/Os completed (+2046) 00:11:59.238 00:11:59.805 20:28:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:59.805 20:28:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.805 20:28:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.805 [2024-07-12 20:28:53.908600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:59.805 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:59.805 [2024-07-12 20:28:53.910411] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 [2024-07-12 20:28:53.910603] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 [2024-07-12 20:28:53.910780] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 [2024-07-12 20:28:53.910815] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:59.805 [2024-07-12 20:28:53.913086] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 [2024-07-12 20:28:53.913310] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 [2024-07-12 20:28:53.913450] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 [2024-07-12 20:28:53.913483] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 20:28:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.805 20:28:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.805 [2024-07-12 20:28:53.935328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:59.805 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:59.805 [2024-07-12 20:28:53.936882] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 [2024-07-12 20:28:53.936949] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 [2024-07-12 20:28:53.936975] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 [2024-07-12 20:28:53.936998] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:59.805 [2024-07-12 20:28:53.938739] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 [2024-07-12 20:28:53.938784] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 [2024-07-12 20:28:53.938809] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 [2024-07-12 20:28:53.938831] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.805 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:59.805 EAL: Scan for (pci) bus failed. 00:11:59.805 20:28:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:00.063 20:28:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:00.063 20:28:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.063 20:28:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.063 20:28:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:00.063 00:12:00.063 20:28:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:00.063 20:28:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.063 20:28:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.063 20:28:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.063 20:28:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:00.063 Attaching to 0000:00:10.0 00:12:00.063 Attached to 0000:00:10.0 00:12:00.321 20:28:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:00.321 20:28:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.321 20:28:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:00.321 Attaching to 0000:00:11.0 00:12:00.321 Attached to 0000:00:11.0 00:12:01.254 QEMU NVMe Ctrl (12340 ): 1874 I/Os completed (+1874) 00:12:01.254 QEMU NVMe Ctrl (12341 ): 1792 I/Os completed (+1792) 00:12:01.254 00:12:02.188 QEMU NVMe Ctrl (12340 ): 3741 I/Os completed (+1867) 00:12:02.188 QEMU NVMe Ctrl (12341 ): 3869 I/Os completed (+2077) 00:12:02.188 00:12:03.126 QEMU NVMe Ctrl (12340 ): 5580 I/Os completed (+1839) 00:12:03.126 QEMU NVMe Ctrl (12341 ): 5863 I/Os completed (+1994) 00:12:03.126 00:12:04.060 QEMU NVMe Ctrl (12340 ): 7416 I/Os completed (+1836) 00:12:04.060 QEMU NVMe Ctrl (12341 ): 7798 I/Os completed (+1935) 00:12:04.060 00:12:04.994 QEMU NVMe Ctrl (12340 ): 9419 I/Os completed (+2003) 00:12:04.994 QEMU NVMe Ctrl (12341 ): 9858 I/Os completed (+2060) 00:12:04.994 00:12:06.368 QEMU NVMe Ctrl (12340 ): 11243 I/Os completed (+1824) 00:12:06.368 QEMU NVMe Ctrl (12341 ): 11794 I/Os completed (+1936) 00:12:06.368 00:12:07.305 QEMU NVMe Ctrl (12340 ): 13170 I/Os completed (+1927) 00:12:07.305 QEMU NVMe Ctrl (12341 ): 13870 I/Os completed (+2076) 
00:12:07.305 00:12:08.238 QEMU NVMe Ctrl (12340 ): 15142 I/Os completed (+1972) 00:12:08.238 QEMU NVMe Ctrl (12341 ): 16028 I/Os completed (+2158) 00:12:08.238 00:12:09.171 QEMU NVMe Ctrl (12340 ): 16954 I/Os completed (+1812) 00:12:09.171 QEMU NVMe Ctrl (12341 ): 18037 I/Os completed (+2009) 00:12:09.171 00:12:10.106 QEMU NVMe Ctrl (12340 ): 18831 I/Os completed (+1877) 00:12:10.106 QEMU NVMe Ctrl (12341 ): 19989 I/Os completed (+1952) 00:12:10.106 00:12:11.041 QEMU NVMe Ctrl (12340 ): 20751 I/Os completed (+1920) 00:12:11.041 QEMU NVMe Ctrl (12341 ): 22087 I/Os completed (+2098) 00:12:11.041 00:12:12.416 QEMU NVMe Ctrl (12340 ): 22645 I/Os completed (+1894) 00:12:12.416 QEMU NVMe Ctrl (12341 ): 24385 I/Os completed (+2298) 00:12:12.416 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.416 [2024-07-12 20:29:06.268407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:12.416 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:12.416 [2024-07-12 20:29:06.270404] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 [2024-07-12 20:29:06.271304] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 [2024-07-12 20:29:06.271343] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 [2024-07-12 20:29:06.271369] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:12.416 [2024-07-12 20:29:06.273494] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 [2024-07-12 20:29:06.273552] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 [2024-07-12 20:29:06.273593] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 [2024-07-12 20:29:06.273628] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.416 [2024-07-12 20:29:06.300489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:12.416 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:12.416 [2024-07-12 20:29:06.302214] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 [2024-07-12 20:29:06.302404] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 [2024-07-12 20:29:06.302582] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 [2024-07-12 20:29:06.302619] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:12.416 [2024-07-12 20:29:06.304528] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 [2024-07-12 20:29:06.304593] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 [2024-07-12 20:29:06.304619] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 [2024-07-12 20:29:06.304640] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:12.416 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:12.416 EAL: Scan for (pci) bus failed. 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:12.416 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:12.416 Attaching to 0000:00:10.0 00:12:12.416 Attached to 0000:00:10.0 00:12:12.674 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:12.674 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:12.674 20:29:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:12.674 Attaching to 0000:00:11.0 00:12:12.674 Attached to 0000:00:11.0 00:12:13.251 QEMU NVMe Ctrl (12340 ): 1116 I/Os completed (+1116) 00:12:13.251 QEMU NVMe Ctrl (12341 ): 999 I/Os completed (+999) 00:12:13.251 00:12:14.187 QEMU NVMe Ctrl (12340 ): 2821 I/Os completed (+1705) 00:12:14.187 QEMU NVMe Ctrl (12341 ): 2965 I/Os completed (+1966) 00:12:14.187 00:12:15.143 QEMU NVMe Ctrl (12340 ): 4630 I/Os completed (+1809) 00:12:15.143 QEMU NVMe Ctrl (12341 ): 4964 I/Os completed (+1999) 00:12:15.143 00:12:16.079 QEMU NVMe Ctrl (12340 ): 6500 I/Os completed (+1870) 00:12:16.079 QEMU NVMe Ctrl (12341 ): 7059 I/Os completed (+2095) 00:12:16.079 00:12:17.013 QEMU NVMe Ctrl (12340 ): 8423 I/Os completed (+1923) 00:12:17.013 QEMU NVMe Ctrl (12341 ): 9065 I/Os completed (+2006) 00:12:17.013 00:12:18.388 QEMU NVMe Ctrl (12340 ): 10367 I/Os completed (+1944) 00:12:18.388 QEMU NVMe Ctrl (12341 ): 11490 I/Os completed (+2425) 00:12:18.388 00:12:18.998 QEMU NVMe Ctrl (12340 ): 12455 I/Os completed (+2088) 00:12:18.998 QEMU NVMe Ctrl (12341 ): 13522 I/Os completed (+2032) 00:12:18.998 
00:12:20.372 QEMU NVMe Ctrl (12340 ): 14702 I/Os completed (+2247) 00:12:20.372 QEMU NVMe Ctrl (12341 ): 16258 I/Os completed (+2736) 00:12:20.372 00:12:21.306 QEMU NVMe Ctrl (12340 ): 16742 I/Os completed (+2040) 00:12:21.306 QEMU NVMe Ctrl (12341 ): 18512 I/Os completed (+2254) 00:12:21.306 00:12:22.241 QEMU NVMe Ctrl (12340 ): 18738 I/Os completed (+1996) 00:12:22.241 QEMU NVMe Ctrl (12341 ): 20934 I/Os completed (+2422) 00:12:22.241 00:12:23.229 QEMU NVMe Ctrl (12340 ): 20464 I/Os completed (+1726) 00:12:23.229 QEMU NVMe Ctrl (12341 ): 22975 I/Os completed (+2041) 00:12:23.229 00:12:24.164 QEMU NVMe Ctrl (12340 ): 22354 I/Os completed (+1890) 00:12:24.164 QEMU NVMe Ctrl (12341 ): 25461 I/Os completed (+2486) 00:12:24.164 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:24.732 [2024-07-12 20:29:18.602129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:24.732 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:24.732 [2024-07-12 20:29:18.604530] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 [2024-07-12 20:29:18.605527] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 [2024-07-12 20:29:18.605579] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 [2024-07-12 20:29:18.605605] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:24.732 [2024-07-12 20:29:18.607995] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 [2024-07-12 20:29:18.608052] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 [2024-07-12 20:29:18.608086] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 [2024-07-12 20:29:18.608110] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:24.732 [2024-07-12 20:29:18.636217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:24.732 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:24.732 [2024-07-12 20:29:18.638378] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 [2024-07-12 20:29:18.638581] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 [2024-07-12 20:29:18.638796] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 [2024-07-12 20:29:18.638969] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:24.732 [2024-07-12 20:29:18.641663] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 [2024-07-12 20:29:18.641874] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 [2024-07-12 20:29:18.642057] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 [2024-07-12 20:29:18.642260] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.732 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/subsystem_vendor 00:12:24.732 EAL: Scan for (pci) bus failed. 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:24.732 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:24.732 Attaching to 0000:00:10.0 00:12:24.732 Attached to 0000:00:10.0 00:12:24.991 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:24.991 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.991 20:29:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:24.991 Attaching to 0000:00:11.0 00:12:24.991 Attached to 0000:00:11.0 00:12:24.991 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:24.991 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:24.991 [2024-07-12 20:29:18.945837] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:37.186 20:29:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:37.186 20:29:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:37.186 20:29:30 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.03 00:12:37.186 20:29:30 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.03 00:12:37.186 20:29:30 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:12:37.186 20:29:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.03 00:12:37.186 20:29:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.03 2 00:12:37.186 remove_attach_helper took 43.03s to complete (handling 2 nvme drive(s)) 20:29:30 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:43.745 20:29:36 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 85636 00:12:43.745 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (85636) - No such process 00:12:43.745 20:29:36 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 85636 00:12:43.745 20:29:36 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:43.745 20:29:36 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:43.745 20:29:36 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:43.745 20:29:36 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=86187 00:12:43.745 20:29:36 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:43.745 20:29:36 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:43.745 20:29:36 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 86187 00:12:43.745 20:29:36 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 86187 ']' 00:12:43.745 20:29:36 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.745 20:29:36 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.745 20:29:36 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.745 20:29:36 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.745 20:29:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:43.745 [2024-07-12 20:29:37.061915] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:12:43.745 [2024-07-12 20:29:37.062353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86187 ] 00:12:43.745 [2024-07-12 20:29:37.214359] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:43.745 [2024-07-12 20:29:37.238298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.745 [2024-07-12 20:29:37.335901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.004 20:29:37 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.005 20:29:37 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:12:44.005 20:29:37 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:44.005 20:29:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.005 20:29:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.005 20:29:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.005 20:29:37 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:44.005 20:29:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:44.005 20:29:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:44.005 20:29:37 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:12:44.005 20:29:37 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:12:44.005 20:29:37 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:12:44.005 20:29:37 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:12:44.005 20:29:37 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:12:44.005 20:29:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:44.005 20:29:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:44.005 20:29:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:44.005 20:29:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:44.005 20:29:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:50.597 20:29:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:50.597 20:29:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.597 20:29:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.597 20:29:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.597 20:29:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.597 20:29:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.597 20:29:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.597 20:29:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.597 [2024-07-12 20:29:44.059759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:50.597 [2024-07-12 20:29:44.062716] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.597 [2024-07-12 20:29:44.062775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.597 [2024-07-12 20:29:44.062815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.597 [2024-07-12 20:29:44.062851] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.597 [2024-07-12 20:29:44.062882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.597 [2024-07-12 20:29:44.062917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.597 [2024-07-12 20:29:44.062953] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.597 [2024-07-12 20:29:44.062977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.597 [2024-07-12 20:29:44.063006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.597 [2024-07-12 20:29:44.063032] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.597 [2024-07-12 20:29:44.063058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.597 [2024-07-12 20:29:44.063083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.597 20:29:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.597 20:29:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.597 20:29:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:50.597 20:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:50.855 [2024-07-12 20:29:44.759823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:50.855 [2024-07-12 20:29:44.762851] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.855 [2024-07-12 20:29:44.763047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.855 [2024-07-12 20:29:44.763276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.855 [2024-07-12 20:29:44.763488] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.855 [2024-07-12 20:29:44.763525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.855 [2024-07-12 20:29:44.763557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.855 [2024-07-12 20:29:44.763584] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.855 [2024-07-12 20:29:44.763614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.855 [2024-07-12 20:29:44.763637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.855 [2024-07-12 20:29:44.763670] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.855 [2024-07-12 20:29:44.763694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.855 [2024-07-12 20:29:44.763722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.112 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:51.112 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:51.112 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:51.112 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:51.112 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:51.112 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:51.112 20:29:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.112 20:29:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:51.112 20:29:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.112 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:51.112 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:51.370 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:51.370 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:51.370 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:51.370 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:51.370 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:51.370 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:51.370 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:51.370 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:51.370 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:51.370 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:51.370 20:29:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.569 20:29:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.569 20:29:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.569 20:29:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.569 [2024-07-12 20:29:57.560024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:03.569 [2024-07-12 20:29:57.563142] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.569 [2024-07-12 20:29:57.563329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.569 [2024-07-12 20:29:57.563511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.569 [2024-07-12 20:29:57.563753] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.569 [2024-07-12 20:29:57.563942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.569 [2024-07-12 20:29:57.563970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.569 [2024-07-12 20:29:57.563992] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.569 [2024-07-12 20:29:57.564007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.569 [2024-07-12 20:29:57.564023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.569 [2024-07-12 20:29:57.564037] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.569 [2024-07-12 20:29:57.564053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.569 [2024-07-12 20:29:57.564067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.569 20:29:57 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.569 20:29:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.569 20:29:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.569 20:29:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:03.569 20:29:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:04.136 [2024-07-12 20:29:58.060050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:13:04.136 [2024-07-12 20:29:58.062731] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.136 [2024-07-12 20:29:58.062789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.136 [2024-07-12 20:29:58.062812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.136 [2024-07-12 20:29:58.062836] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.136 [2024-07-12 20:29:58.062850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.136 [2024-07-12 20:29:58.062866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.136 [2024-07-12 20:29:58.062881] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.136 [2024-07-12 20:29:58.062896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.136 [2024-07-12 20:29:58.062916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.136 [2024-07-12 20:29:58.062932] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.136 [2024-07-12 20:29:58.062945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.136 [2024-07-12 20:29:58.062961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.136 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:04.136 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:04.136 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:04.136 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:04.136 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:04.136 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:13:04.136 20:29:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.136 20:29:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:04.136 20:29:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.136 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:04.136 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:04.395 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:04.395 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:04.395 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:04.395 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:04.395 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:04.395 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:04.395 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:04.395 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:04.395 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:04.395 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:04.395 20:29:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.624 20:30:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.624 20:30:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.624 20:30:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.624 20:30:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.624 20:30:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.624 20:30:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.624 [2024-07-12 20:30:10.660298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:16.624 20:30:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:16.624 [2024-07-12 20:30:10.663208] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.624 [2024-07-12 20:30:10.663398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.624 [2024-07-12 20:30:10.663577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.624 [2024-07-12 20:30:10.663739] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.624 [2024-07-12 20:30:10.663976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.624 [2024-07-12 20:30:10.664163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.624 [2024-07-12 20:30:10.664388] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.624 [2024-07-12 20:30:10.664557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.624 [2024-07-12 20:30:10.664709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.624 [2024-07-12 20:30:10.664854] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.624 [2024-07-12 20:30:10.665003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.624 [2024-07-12 20:30:10.665169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.191 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:17.191 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:17.191 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:17.191 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:17.191 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:17.191 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:17.191 20:30:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.191 20:30:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:17.191 20:30:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.191 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:17.191 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:17.448 [2024-07-12 20:30:11.360341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:17.448 [2024-07-12 20:30:11.362922] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:17.448 [2024-07-12 20:30:11.362984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.448 [2024-07-12 20:30:11.363014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.448 [2024-07-12 20:30:11.363039] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:17.448 [2024-07-12 20:30:11.363054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.448 [2024-07-12 20:30:11.363074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.448 [2024-07-12 20:30:11.363095] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:17.448 [2024-07-12 20:30:11.363113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.448 [2024-07-12 20:30:11.363127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.448 [2024-07-12 20:30:11.363143] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:17.448 [2024-07-12 20:30:11.363157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.448 [2024-07-12 20:30:11.363173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.707 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:17.707 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:17.707 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:17.707 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:17.707 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:17.707 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:17.707 20:30:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.707 20:30:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:17.707 20:30:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.707 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:17.707 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:17.965 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:17.965 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:17.965 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:17.965 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:17.965 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:17.965 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:17.965 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:17.965 20:30:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:17.965 20:30:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:17.965 20:30:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:17.965 20:30:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@715 -- # time=46.15 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@716 -- # echo 46.15 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.15 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.15 2 00:13:30.171 remove_attach_helper took 46.15s to complete (handling 2 nvme drive(s)) 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:13:30.171 20:30:24 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:30.171 20:30:24 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:30.171 20:30:24 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:36.731 20:30:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.731 20:30:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:36.731 [2024-07-12 20:30:30.236019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:36.731 20:30:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.731 [2024-07-12 20:30:30.238520] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.731 [2024-07-12 20:30:30.238581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.731 [2024-07-12 20:30:30.238608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.731 [2024-07-12 20:30:30.238660] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.731 [2024-07-12 20:30:30.238713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.731 [2024-07-12 20:30:30.238731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.731 [2024-07-12 20:30:30.238751] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.731 [2024-07-12 20:30:30.238765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.731 [2024-07-12 20:30:30.238784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.731 [2024-07-12 20:30:30.238798] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.731 [2024-07-12 20:30:30.238814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.731 [2024-07-12 20:30:30.238827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:36.731 [2024-07-12 20:30:30.736036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:36.731 [2024-07-12 20:30:30.738145] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.731 [2024-07-12 20:30:30.738362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.731 [2024-07-12 20:30:30.738396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.731 [2024-07-12 20:30:30.738423] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.731 [2024-07-12 20:30:30.738439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.731 [2024-07-12 20:30:30.738456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.731 [2024-07-12 20:30:30.738480] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.731 [2024-07-12 20:30:30.738502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.731 [2024-07-12 20:30:30.738516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.731 [2024-07-12 20:30:30.738533] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.731 [2024-07-12 20:30:30.738546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.731 [2024-07-12 20:30:30.738573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:36.731 20:30:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:36.731 20:30:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:36.731 20:30:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:36.731 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:37.026 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:37.026 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:37.026 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:37.026 20:30:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:37.026 20:30:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:37.026 20:30:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:37.026 20:30:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:37.026 20:30:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:37.026 20:30:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:37.026 20:30:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:37.026 20:30:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:49.222 20:30:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.222 20:30:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:49.222 20:30:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:49.222 20:30:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.222 20:30:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:49.222 [2024-07-12 20:30:43.236306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
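Editor's note: the bare `echo 1`, `echo uio_pci_generic` and `echo 0000:00:10.0` lines in this trace are sysfs writes that fake a surprise removal and re-probe of each controller; the trace does not show which files they land in, so the following is only an illustrative sysfs hot-remove/rescan cycle under that assumption, not a transcript of what sw_hotplug.sh writes:

# Hypothetical software hot-unplug / re-plug of one NVMe function via sysfs.
# The concrete paths are assumptions; only the echoed values appear in the log.
bdf=0000:00:10.0
echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise-remove the function
sleep 6                                       # pause, as the suite does between events
echo 1 > /sys/bus/pci/rescan                  # let the kernel re-discover it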
00:13:49.222 [2024-07-12 20:30:43.238933] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.222 [2024-07-12 20:30:43.239101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.222 [2024-07-12 20:30:43.239412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.222 [2024-07-12 20:30:43.239562] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.222 [2024-07-12 20:30:43.239676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.222 [2024-07-12 20:30:43.239811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.222 [2024-07-12 20:30:43.240018] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.222 [2024-07-12 20:30:43.240127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.222 [2024-07-12 20:30:43.240283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.222 [2024-07-12 20:30:43.240415] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.222 [2024-07-12 20:30:43.240563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.222 [2024-07-12 20:30:43.240696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.222 20:30:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:49.222 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:49.790 [2024-07-12 20:30:43.636356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:49.790 [2024-07-12 20:30:43.641759] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.790 [2024-07-12 20:30:43.641942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.790 [2024-07-12 20:30:43.642179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.790 [2024-07-12 20:30:43.642468] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.790 [2024-07-12 20:30:43.642503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.790 [2024-07-12 20:30:43.642522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.790 [2024-07-12 20:30:43.642537] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.790 [2024-07-12 20:30:43.642554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.790 [2024-07-12 20:30:43.642568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.790 [2024-07-12 20:30:43.642584] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.790 [2024-07-12 20:30:43.642600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.790 [2024-07-12 20:30:43.642615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.790 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:49.790 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:49.790 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:49.790 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:49.790 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:49.790 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:49.790 20:30:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.790 20:30:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:49.790 20:30:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.790 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:49.790 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:49.790 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:49.790 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:49.790 20:30:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:50.049 20:30:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:50.049 20:30:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:50.049 20:30:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:50.049 20:30:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:50.049 20:30:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:50.049 20:30:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:50.049 20:30:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:50.049 20:30:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:02.310 20:30:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.310 20:30:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:02.310 20:30:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:02.310 20:30:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.310 20:30:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:02.310 [2024-07-12 20:30:56.236629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:14:02.310 [2024-07-12 20:30:56.238526] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.310 [2024-07-12 20:30:56.238699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.310 [2024-07-12 20:30:56.238739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.310 [2024-07-12 20:30:56.238763] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.310 [2024-07-12 20:30:56.238785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.310 [2024-07-12 20:30:56.238799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.310 [2024-07-12 20:30:56.238816] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.310 [2024-07-12 20:30:56.238829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.310 [2024-07-12 20:30:56.238845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.310 [2024-07-12 20:30:56.238859] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.310 [2024-07-12 20:30:56.238874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.310 [2024-07-12 20:30:56.238893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.310 20:30:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:02.310 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:02.618 [2024-07-12 20:30:56.636661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:02.618 [2024-07-12 20:30:56.639384] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.618 [2024-07-12 20:30:56.639473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.618 [2024-07-12 20:30:56.639496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.618 [2024-07-12 20:30:56.639521] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.618 [2024-07-12 20:30:56.639536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.618 [2024-07-12 20:30:56.639553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.618 [2024-07-12 20:30:56.639568] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.618 [2024-07-12 20:30:56.639588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.618 [2024-07-12 20:30:56.639601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.618 [2024-07-12 20:30:56.639618] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.618 [2024-07-12 20:30:56.639631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.618 [2024-07-12 20:30:56.639647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.875 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:02.875 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:02.875 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:02.875 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:02.875 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:02.875 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:02.875 20:30:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.875 20:30:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:02.875 20:30:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.875 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:02.875 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:02.875 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:02.875 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:02.875 20:30:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:03.133 20:30:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:03.133 20:30:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:03.133 20:30:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:03.133 20:30:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:03.133 20:30:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:03.133 20:30:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:03.133 20:30:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:03.133 20:30:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:15.333 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:15.333 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:15.333 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:15.333 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:15.333 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:15.333 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.333 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:15.333 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.11 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.11 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:14:15.333 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.11 00:14:15.333 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.11 2 00:14:15.333 remove_attach_helper took 45.11s to complete (handling 2 nvme drive(s)) 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:15.333 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 86187 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 86187 ']' 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 86187 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86187 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:15.333 20:31:09 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:15.333 killing process with pid 86187 00:14:15.334 20:31:09 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86187' 00:14:15.334 20:31:09 sw_hotplug -- common/autotest_common.sh@967 -- # kill 86187 00:14:15.334 20:31:09 sw_hotplug -- common/autotest_common.sh@972 -- # wait 86187 00:14:15.900 20:31:09 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:16.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:16.722 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:16.722 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:16.722 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:16.722 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:16.722 00:14:16.722 real 2m30.622s 00:14:16.723 user 1m49.937s 00:14:16.723 sys 0m20.511s 00:14:16.723 20:31:10 sw_hotplug -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:14:16.723 20:31:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:16.723 ************************************ 00:14:16.723 END TEST sw_hotplug 00:14:16.723 ************************************ 00:14:16.723 20:31:10 -- common/autotest_common.sh@1142 -- # return 0 00:14:16.723 20:31:10 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:14:16.723 20:31:10 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:16.723 20:31:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:16.723 20:31:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.723 20:31:10 -- common/autotest_common.sh@10 -- # set +x 00:14:16.723 ************************************ 00:14:16.723 START TEST nvme_xnvme 00:14:16.723 ************************************ 00:14:16.723 20:31:10 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:16.980 * Looking for test storage... 00:14:16.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:16.980 20:31:10 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:16.980 20:31:10 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.980 20:31:10 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.980 20:31:10 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.980 20:31:10 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.980 20:31:10 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.980 20:31:10 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.980 20:31:10 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:16.980 20:31:10 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.980 20:31:10 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 
00:14:16.980 20:31:10 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:16.980 20:31:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.980 20:31:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:16.980 ************************************ 00:14:16.980 START TEST xnvme_to_malloc_dd_copy 00:14:16.980 ************************************ 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:14:16.980 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:14:16.981 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:14:16.981 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:14:16.981 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:14:16.981 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:16.981 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:16.981 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:16.981 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:16.981 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:16.981 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:16.981 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:16.981 20:31:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:16.981 { 00:14:16.981 "subsystems": [ 00:14:16.981 { 00:14:16.981 "subsystem": "bdev", 00:14:16.981 "config": [ 00:14:16.981 { 00:14:16.981 "params": { 00:14:16.981 "block_size": 512, 00:14:16.981 "num_blocks": 2097152, 00:14:16.981 "name": "malloc0" 00:14:16.981 }, 00:14:16.981 "method": 
"bdev_malloc_create" 00:14:16.981 }, 00:14:16.981 { 00:14:16.981 "params": { 00:14:16.981 "io_mechanism": "libaio", 00:14:16.981 "filename": "/dev/nullb0", 00:14:16.981 "name": "null0" 00:14:16.981 }, 00:14:16.981 "method": "bdev_xnvme_create" 00:14:16.981 }, 00:14:16.981 { 00:14:16.981 "method": "bdev_wait_for_examine" 00:14:16.981 } 00:14:16.981 ] 00:14:16.981 } 00:14:16.981 ] 00:14:16.981 } 00:14:16.981 [2024-07-12 20:31:11.055716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:16.981 [2024-07-12 20:31:11.055903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87535 ] 00:14:17.237 [2024-07-12 20:31:11.207329] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:17.238 [2024-07-12 20:31:11.227287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.238 [2024-07-12 20:31:11.313173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.426  Copying: 169/1024 [MB] (169 MBps) Copying: 341/1024 [MB] (171 MBps) Copying: 510/1024 [MB] (169 MBps) Copying: 682/1024 [MB] (172 MBps) Copying: 853/1024 [MB] (171 MBps) Copying: 1024/1024 [MB] (average 171 MBps) 00:14:24.426 00:14:24.426 20:31:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:24.426 20:31:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:24.426 20:31:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:24.426 20:31:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:24.426 { 00:14:24.426 "subsystems": [ 00:14:24.426 { 00:14:24.426 "subsystem": "bdev", 00:14:24.426 "config": [ 00:14:24.426 { 00:14:24.426 "params": { 00:14:24.426 "block_size": 512, 00:14:24.426 "num_blocks": 2097152, 00:14:24.426 "name": "malloc0" 00:14:24.426 }, 00:14:24.426 "method": "bdev_malloc_create" 00:14:24.426 }, 00:14:24.426 { 00:14:24.426 "params": { 00:14:24.426 "io_mechanism": "libaio", 00:14:24.426 "filename": "/dev/nullb0", 00:14:24.426 "name": "null0" 00:14:24.426 }, 00:14:24.426 "method": "bdev_xnvme_create" 00:14:24.426 }, 00:14:24.426 { 00:14:24.426 "method": "bdev_wait_for_examine" 00:14:24.426 } 00:14:24.426 ] 00:14:24.426 } 00:14:24.426 ] 00:14:24.426 } 00:14:24.426 [2024-07-12 20:31:18.491372] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:24.426 [2024-07-12 20:31:18.491556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87618 ] 00:14:24.684 [2024-07-12 20:31:18.645853] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:24.684 [2024-07-12 20:31:18.666872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.684 [2024-07-12 20:31:18.748088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.896  Copying: 171/1024 [MB] (171 MBps) Copying: 342/1024 [MB] (171 MBps) Copying: 512/1024 [MB] (170 MBps) Copying: 681/1024 [MB] (168 MBps) Copying: 852/1024 [MB] (171 MBps) Copying: 1022/1024 [MB] (169 MBps) Copying: 1024/1024 [MB] (average 170 MBps) 00:14:31.896 00:14:31.896 20:31:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:31.896 20:31:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:31.896 20:31:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:31.896 20:31:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:31.896 20:31:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:31.896 20:31:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:31.896 { 00:14:31.896 "subsystems": [ 00:14:31.896 { 00:14:31.896 "subsystem": "bdev", 00:14:31.896 "config": [ 00:14:31.896 { 00:14:31.896 "params": { 00:14:31.896 "block_size": 512, 00:14:31.896 "num_blocks": 2097152, 00:14:31.896 "name": "malloc0" 00:14:31.896 }, 00:14:31.896 "method": "bdev_malloc_create" 00:14:31.896 }, 00:14:31.896 { 00:14:31.896 "params": { 00:14:31.896 "io_mechanism": "io_uring", 00:14:31.896 "filename": "/dev/nullb0", 00:14:31.896 "name": "null0" 00:14:31.896 }, 00:14:31.896 "method": "bdev_xnvme_create" 00:14:31.896 }, 00:14:31.896 { 00:14:31.896 "method": "bdev_wait_for_examine" 00:14:31.896 } 00:14:31.896 ] 00:14:31.896 } 00:14:31.896 ] 00:14:31.896 } 00:14:31.896 [2024-07-12 20:31:25.947135] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:31.896 [2024-07-12 20:31:25.947360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87705 ] 00:14:32.155 [2024-07-12 20:31:26.090863] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:32.155 [2024-07-12 20:31:26.111100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.155 [2024-07-12 20:31:26.194668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.234  Copying: 177/1024 [MB] (177 MBps) Copying: 355/1024 [MB] (178 MBps) Copying: 532/1024 [MB] (177 MBps) Copying: 711/1024 [MB] (178 MBps) Copying: 887/1024 [MB] (176 MBps) Copying: 1024/1024 [MB] (average 177 MBps) 00:14:39.234 00:14:39.234 20:31:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:39.234 20:31:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:39.234 20:31:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:39.234 20:31:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:39.234 { 00:14:39.234 "subsystems": [ 00:14:39.234 { 00:14:39.234 "subsystem": "bdev", 00:14:39.234 "config": [ 00:14:39.234 { 00:14:39.234 "params": { 00:14:39.234 "block_size": 512, 00:14:39.234 "num_blocks": 2097152, 00:14:39.234 "name": "malloc0" 00:14:39.234 }, 00:14:39.234 "method": "bdev_malloc_create" 00:14:39.234 }, 00:14:39.234 { 00:14:39.234 "params": { 00:14:39.234 "io_mechanism": "io_uring", 00:14:39.234 "filename": "/dev/nullb0", 00:14:39.234 "name": "null0" 00:14:39.234 }, 00:14:39.234 "method": "bdev_xnvme_create" 00:14:39.234 }, 00:14:39.234 { 00:14:39.234 "method": "bdev_wait_for_examine" 00:14:39.234 } 00:14:39.234 ] 00:14:39.234 } 00:14:39.234 ] 00:14:39.235 } 00:14:39.235 [2024-07-12 20:31:33.129281] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:39.235 [2024-07-12 20:31:33.129465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87788 ] 00:14:39.235 [2024-07-12 20:31:33.282697] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:39.235 [2024-07-12 20:31:33.303783] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.493 [2024-07-12 20:31:33.391700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.880  Copying: 181/1024 [MB] (181 MBps) Copying: 365/1024 [MB] (183 MBps) Copying: 550/1024 [MB] (184 MBps) Copying: 732/1024 [MB] (182 MBps) Copying: 917/1024 [MB] (184 MBps) Copying: 1024/1024 [MB] (average 183 MBps) 00:14:45.880 00:14:46.139 20:31:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:46.139 20:31:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:46.139 00:14:46.139 real 0m29.136s 00:14:46.139 user 0m23.507s 00:14:46.139 sys 0m5.083s 00:14:46.139 20:31:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.139 ************************************ 00:14:46.139 END TEST xnvme_to_malloc_dd_copy 00:14:46.139 ************************************ 00:14:46.139 20:31:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:46.139 20:31:40 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:46.139 20:31:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:46.139 20:31:40 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:46.139 20:31:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.139 20:31:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:46.139 ************************************ 00:14:46.139 START TEST xnvme_bdevperf 00:14:46.139 ************************************ 00:14:46.139 20:31:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:14:46.139 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:46.139 20:31:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w 
randread -t 5 -T null0 -o 4096 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:46.140 20:31:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:46.140 { 00:14:46.140 "subsystems": [ 00:14:46.140 { 00:14:46.140 "subsystem": "bdev", 00:14:46.140 "config": [ 00:14:46.140 { 00:14:46.140 "params": { 00:14:46.140 "io_mechanism": "libaio", 00:14:46.140 "filename": "/dev/nullb0", 00:14:46.140 "name": "null0" 00:14:46.140 }, 00:14:46.140 "method": "bdev_xnvme_create" 00:14:46.140 }, 00:14:46.140 { 00:14:46.140 "method": "bdev_wait_for_examine" 00:14:46.140 } 00:14:46.140 ] 00:14:46.140 } 00:14:46.140 ] 00:14:46.140 } 00:14:46.140 [2024-07-12 20:31:40.231480] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:46.140 [2024-07-12 20:31:40.231645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87893 ] 00:14:46.398 [2024-07-12 20:31:40.375116] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:46.398 [2024-07-12 20:31:40.395129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.398 [2024-07-12 20:31:40.477826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.657 Running I/O for 5 seconds... 00:14:51.922 00:14:51.922 Latency(us) 00:14:51.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.922 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:51.922 null0 : 5.00 119057.40 465.07 0.00 0.00 534.20 200.15 752.17 00:14:51.922 =================================================================================================================== 00:14:51.922 Total : 119057.40 465.07 0.00 0.00 534.20 200.15 752.17 00:14:51.922 20:31:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:51.922 20:31:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:51.922 20:31:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:51.922 20:31:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:51.922 20:31:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:51.922 20:31:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:51.922 { 00:14:51.922 "subsystems": [ 00:14:51.922 { 00:14:51.922 "subsystem": "bdev", 00:14:51.922 "config": [ 00:14:51.922 { 00:14:51.922 "params": { 00:14:51.922 "io_mechanism": "io_uring", 00:14:51.922 "filename": "/dev/nullb0", 00:14:51.922 "name": "null0" 00:14:51.922 }, 00:14:51.922 "method": "bdev_xnvme_create" 00:14:51.922 }, 00:14:51.922 { 00:14:51.922 "method": "bdev_wait_for_examine" 00:14:51.922 } 00:14:51.922 ] 00:14:51.922 } 00:14:51.922 ] 00:14:51.922 } 00:14:51.922 [2024-07-12 20:31:45.977816] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
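Editor's note: the bdevperf runs traced here exercise the null0 xnvme bdev directly: queue depth 64, 4096-byte random reads for 5 seconds, first with libaio (results above) and then with io_uring (config just printed). A file-based sketch of the io_uring invocation, under the same repo-layout assumption as before:

# 5-second 4 KiB random-read run (qd=64) against the io_uring xnvme bdev defined in
# the config block above; the harness passes the JSON on fd 62 rather than a file.
cat > perf.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "params": { "io_mechanism": "io_uring", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ] }
  ]
}
EOF
./build/examples/bdevperf --json perf.json -q 64 -o 4096 -w randread -t 5 -T null0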
00:14:51.922 [2024-07-12 20:31:45.977985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87962 ] 00:14:52.181 [2024-07-12 20:31:46.120634] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:52.181 [2024-07-12 20:31:46.141233] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.181 [2024-07-12 20:31:46.222137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.181 Running I/O for 5 seconds... 00:14:57.507 00:14:57.507 Latency(us) 00:14:57.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.507 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:57.507 null0 : 5.00 156698.82 612.10 0.00 0.00 405.18 238.31 551.10 00:14:57.507 =================================================================================================================== 00:14:57.507 Total : 156698.82 612.10 0.00 0.00 405.18 238.31 551.10 00:14:57.507 20:31:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:57.507 20:31:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:57.507 00:14:57.507 real 0m11.494s 00:14:57.507 user 0m8.448s 00:14:57.507 sys 0m2.842s 00:14:57.507 20:31:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.507 20:31:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:57.507 ************************************ 00:14:57.507 END TEST xnvme_bdevperf 00:14:57.507 ************************************ 00:14:57.765 20:31:51 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:57.765 ************************************ 00:14:57.765 END TEST nvme_xnvme 00:14:57.765 ************************************ 00:14:57.765 00:14:57.765 real 0m40.816s 00:14:57.765 user 0m32.028s 00:14:57.765 sys 0m8.032s 00:14:57.765 20:31:51 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.765 20:31:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:57.765 20:31:51 -- common/autotest_common.sh@1142 -- # return 0 00:14:57.765 20:31:51 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:57.765 20:31:51 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:57.765 20:31:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.765 20:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:57.765 ************************************ 00:14:57.765 START TEST blockdev_xnvme 00:14:57.765 ************************************ 00:14:57.765 20:31:51 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:57.765 * Looking for test storage... 
00:14:57.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=88091 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:57.765 20:31:51 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 88091 00:14:57.765 20:31:51 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 88091 ']' 00:14:57.765 20:31:51 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.765 20:31:51 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.765 20:31:51 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.765 20:31:51 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.765 20:31:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.022 [2024-07-12 20:31:51.913792] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
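Editor's note: once the spdk_tgt above is listening, blockdev.sh's setup_xnvme_conf stage (its xtrace follows) walks /dev/nvme*n*, skips zoned namespaces, and issues one bdev_xnvme_create per namespace. A reduced sketch of that enumeration, assuming scripts/rpc.py in place of the rpc_cmd wrapper and the io_uring mechanism the trace selects:

io_mechanism=io_uring
for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue                              # block devices only
    zoned=$(cat "/sys/block/${nvme##*/}/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned == none ]] || continue                         # skip zoned namespaces
    # e.g. bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring, as printed in the trace
    rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" "$io_mechanism"
done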
00:14:58.022 [2024-07-12 20:31:51.913976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88091 ] 00:14:58.022 [2024-07-12 20:31:52.067360] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:58.022 [2024-07-12 20:31:52.088898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.281 [2024-07-12 20:31:52.172811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.876 20:31:52 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.876 20:31:52 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:14:58.876 20:31:52 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:14:58.876 20:31:52 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:14:58.876 20:31:52 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:58.876 20:31:52 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:58.876 20:31:52 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:59.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:59.391 Waiting for block devices as requested 00:14:59.391 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:59.391 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:59.648 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:59.648 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:04.962 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:04.962 20:31:58 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:04.962 20:31:58 blockdev_xnvme -- 
bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:04.962 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:15:04.963 nvme0n1 00:15:04.963 nvme1n1 00:15:04.963 nvme2n1 00:15:04.963 nvme2n2 00:15:04.963 nvme2n3 00:15:04.963 nvme3n1 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:15:04.963 20:31:58 
blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "9b59c1dd-344a-438b-b8af-f30688cf4460"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9b59c1dd-344a-438b-b8af-f30688cf4460",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d9192e85-32b5-4ec2-90a2-c9a32a618722"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d9192e85-32b5-4ec2-90a2-c9a32a618722",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "53bfb587-5a8d-4313-b30b-d5543da957dc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "53bfb587-5a8d-4313-b30b-d5543da957dc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e065821d-6881-4a81-a06f-73208d8e7889"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' 
"num_blocks": 1048576,' ' "uuid": "e065821d-6881-4a81-a06f-73208d8e7889",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "0306990e-c5d8-43b6-bf00-e77ad4a1e536"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0306990e-c5d8-43b6-bf00-e77ad4a1e536",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "77f5a2c6-5531-4f83-8fa2-10a17e563123"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "77f5a2c6-5531-4f83-8fa2-10a17e563123",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:15:04.963 20:31:58 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 88091 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 88091 ']' 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 88091 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:04.963 20:31:58 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88091 00:15:04.963 20:31:59 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:04.963 20:31:59 blockdev_xnvme -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:04.963 killing process with pid 88091 00:15:04.963 20:31:59 blockdev_xnvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88091' 00:15:04.963 20:31:59 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 88091 00:15:04.963 20:31:59 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 88091 00:15:05.531 20:31:59 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:05.531 20:31:59 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:05.531 20:31:59 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:15:05.531 20:31:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.531 20:31:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.531 ************************************ 00:15:05.531 START TEST bdev_hello_world 00:15:05.531 ************************************ 00:15:05.531 20:31:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:05.531 [2024-07-12 20:31:59.544261] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:05.531 [2024-07-12 20:31:59.544469] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88437 ] 00:15:05.789 [2024-07-12 20:31:59.687159] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:05.789 [2024-07-12 20:31:59.706658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.789 [2024-07-12 20:31:59.773486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.074 [2024-07-12 20:31:59.978534] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:06.074 [2024-07-12 20:31:59.978585] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:06.074 [2024-07-12 20:31:59.978616] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:06.074 [2024-07-12 20:31:59.981119] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:06.074 [2024-07-12 20:31:59.981499] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:06.074 [2024-07-12 20:31:59.981540] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:06.074 [2024-07-12 20:31:59.981759] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
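Condensing the trace above: blockdev.sh skips zoned namespaces, enumerates /dev/nvme*n*, queues one bdev_xnvme_create call per block device, replays those calls over JSON-RPC, and then lists the unclaimed bdevs. The following is a minimal standalone sketch of that flow, not the harness's actual code: the io_mechanism value, RPC method names, and jq filter are taken from the log, while folding the zoned check into the same loop and issuing one rpc.py call per command (instead of the harness's rpc_cmd helper) are editorial simplifications.

    #!/usr/bin/env bash
    # Sketch: register every non-zoned NVMe namespace as an xNVMe bdev (io_uring engine).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumes an SPDK target is already running
    io_mechanism=io_uring
    nvmes=()
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue                              # only real block devices
        zoned=/sys/block/${nvme##*/}/queue/zoned
        [[ -e $zoned && $(<"$zoned") != none ]] && continue     # skip zoned namespaces
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism")
    done
    (( ${#nvmes[@]} > 0 )) || { echo "no usable namespaces" >&2; exit 1; }
    for cmd in "${nvmes[@]}"; do
        $rpc $cmd                                               # one RPC per namespace
    done
    $rpc bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'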
00:15:06.074 00:15:06.074 [2024-07-12 20:31:59.981794] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:06.351 00:15:06.351 real 0m0.773s 00:15:06.351 user 0m0.435s 00:15:06.351 sys 0m0.229s 00:15:06.351 20:32:00 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:06.351 20:32:00 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:06.351 ************************************ 00:15:06.351 END TEST bdev_hello_world 00:15:06.351 ************************************ 00:15:06.351 20:32:00 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:06.351 20:32:00 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:15:06.351 20:32:00 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:06.351 20:32:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.351 20:32:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:06.351 ************************************ 00:15:06.351 START TEST bdev_bounds 00:15:06.351 ************************************ 00:15:06.351 20:32:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:15:06.351 20:32:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=88468 00:15:06.351 20:32:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:06.351 20:32:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:06.351 Process bdevio pid: 88468 00:15:06.351 20:32:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 88468' 00:15:06.351 20:32:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 88468 00:15:06.351 20:32:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 88468 ']' 00:15:06.351 20:32:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.351 20:32:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.351 20:32:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.351 20:32:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.351 20:32:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:06.351 [2024-07-12 20:32:00.371482] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:06.351 [2024-07-12 20:32:00.371655] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88468 ] 00:15:06.609 [2024-07-12 20:32:00.514908] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
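The bdev_bounds phase that starts here runs the bdevio app against the same bdev.json and only drives tests.py once the app's RPC socket is listening. Roughly, and with the socket path and the polling loop as assumptions (the harness's waitforlisten helper does more careful pid and timeout handling):

    # Sketch: launch bdevio from a JSON config, wait for its RPC socket, then run the suite.
    bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    sock=/var/tmp/spdk.sock                        # assumed default RPC socket
    "$bdevio" -w -s 0 --json "$conf" &
    bdevio_pid=$!
    for ((i = 0; i < 100; i++)); do                # crude stand-in for waitforlisten
        [[ -S $sock ]] && break
        sleep 0.1
    done
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid" && wait "$bdevio_pid"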
00:15:06.609 [2024-07-12 20:32:00.534496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:06.609 [2024-07-12 20:32:00.611483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.609 [2024-07-12 20:32:00.611578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.609 [2024-07-12 20:32:00.611672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.174 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.174 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:15:07.174 20:32:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:07.432 I/O targets: 00:15:07.432 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:07.432 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:07.432 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:07.432 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:07.432 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:07.432 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:07.432 00:15:07.432 00:15:07.432 CUnit - A unit testing framework for C - Version 2.1-3 00:15:07.432 http://cunit.sourceforge.net/ 00:15:07.432 00:15:07.432 00:15:07.432 Suite: bdevio tests on: nvme3n1 00:15:07.432 Test: blockdev write read block ...passed 00:15:07.432 Test: blockdev write zeroes read block ...passed 00:15:07.432 Test: blockdev write zeroes read no split ...passed 00:15:07.432 Test: blockdev write zeroes read split ...passed 00:15:07.432 Test: blockdev write zeroes read split partial ...passed 00:15:07.432 Test: blockdev reset ...passed 00:15:07.433 Test: blockdev write read 8 blocks ...passed 00:15:07.433 Test: blockdev write read size > 128k ...passed 00:15:07.433 Test: blockdev write read invalid size ...passed 00:15:07.433 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.433 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.433 Test: blockdev write read max offset ...passed 00:15:07.433 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.433 Test: blockdev writev readv 8 blocks ...passed 00:15:07.433 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.433 Test: blockdev writev readv block ...passed 00:15:07.433 Test: blockdev writev readv size > 128k ...passed 00:15:07.433 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.433 Test: blockdev comparev and writev ...passed 00:15:07.433 Test: blockdev nvme passthru rw ...passed 00:15:07.433 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.433 Test: blockdev nvme admin passthru ...passed 00:15:07.433 Test: blockdev copy ...passed 00:15:07.433 Suite: bdevio tests on: nvme2n3 00:15:07.433 Test: blockdev write read block ...passed 00:15:07.433 Test: blockdev write zeroes read block ...passed 00:15:07.433 Test: blockdev write zeroes read no split ...passed 00:15:07.433 Test: blockdev write zeroes read split ...passed 00:15:07.433 Test: blockdev write zeroes read split partial ...passed 00:15:07.433 Test: blockdev reset ...passed 00:15:07.433 Test: blockdev write read 8 blocks ...passed 00:15:07.433 Test: blockdev write read size > 128k ...passed 00:15:07.433 Test: blockdev write read invalid size ...passed 00:15:07.433 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.433 Test: blockdev write read offset + nbytes > 
size of blockdev ...passed 00:15:07.433 Test: blockdev write read max offset ...passed 00:15:07.433 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.433 Test: blockdev writev readv 8 blocks ...passed 00:15:07.433 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.433 Test: blockdev writev readv block ...passed 00:15:07.433 Test: blockdev writev readv size > 128k ...passed 00:15:07.433 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.433 Test: blockdev comparev and writev ...passed 00:15:07.433 Test: blockdev nvme passthru rw ...passed 00:15:07.433 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.433 Test: blockdev nvme admin passthru ...passed 00:15:07.433 Test: blockdev copy ...passed 00:15:07.433 Suite: bdevio tests on: nvme2n2 00:15:07.433 Test: blockdev write read block ...passed 00:15:07.433 Test: blockdev write zeroes read block ...passed 00:15:07.433 Test: blockdev write zeroes read no split ...passed 00:15:07.433 Test: blockdev write zeroes read split ...passed 00:15:07.433 Test: blockdev write zeroes read split partial ...passed 00:15:07.433 Test: blockdev reset ...passed 00:15:07.433 Test: blockdev write read 8 blocks ...passed 00:15:07.433 Test: blockdev write read size > 128k ...passed 00:15:07.433 Test: blockdev write read invalid size ...passed 00:15:07.433 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.433 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.433 Test: blockdev write read max offset ...passed 00:15:07.433 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.433 Test: blockdev writev readv 8 blocks ...passed 00:15:07.433 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.433 Test: blockdev writev readv block ...passed 00:15:07.433 Test: blockdev writev readv size > 128k ...passed 00:15:07.433 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.433 Test: blockdev comparev and writev ...passed 00:15:07.433 Test: blockdev nvme passthru rw ...passed 00:15:07.433 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.433 Test: blockdev nvme admin passthru ...passed 00:15:07.433 Test: blockdev copy ...passed 00:15:07.433 Suite: bdevio tests on: nvme2n1 00:15:07.433 Test: blockdev write read block ...passed 00:15:07.433 Test: blockdev write zeroes read block ...passed 00:15:07.433 Test: blockdev write zeroes read no split ...passed 00:15:07.433 Test: blockdev write zeroes read split ...passed 00:15:07.433 Test: blockdev write zeroes read split partial ...passed 00:15:07.433 Test: blockdev reset ...passed 00:15:07.433 Test: blockdev write read 8 blocks ...passed 00:15:07.433 Test: blockdev write read size > 128k ...passed 00:15:07.433 Test: blockdev write read invalid size ...passed 00:15:07.433 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.433 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.433 Test: blockdev write read max offset ...passed 00:15:07.433 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.433 Test: blockdev writev readv 8 blocks ...passed 00:15:07.433 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.433 Test: blockdev writev readv block ...passed 00:15:07.433 Test: blockdev writev readv size > 128k ...passed 00:15:07.433 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.433 Test: blockdev comparev and writev ...passed 
00:15:07.433 Test: blockdev nvme passthru rw ...passed 00:15:07.433 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.433 Test: blockdev nvme admin passthru ...passed 00:15:07.433 Test: blockdev copy ...passed 00:15:07.433 Suite: bdevio tests on: nvme1n1 00:15:07.433 Test: blockdev write read block ...passed 00:15:07.433 Test: blockdev write zeroes read block ...passed 00:15:07.433 Test: blockdev write zeroes read no split ...passed 00:15:07.433 Test: blockdev write zeroes read split ...passed 00:15:07.433 Test: blockdev write zeroes read split partial ...passed 00:15:07.433 Test: blockdev reset ...passed 00:15:07.433 Test: blockdev write read 8 blocks ...passed 00:15:07.433 Test: blockdev write read size > 128k ...passed 00:15:07.433 Test: blockdev write read invalid size ...passed 00:15:07.433 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.433 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.433 Test: blockdev write read max offset ...passed 00:15:07.433 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.433 Test: blockdev writev readv 8 blocks ...passed 00:15:07.433 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.433 Test: blockdev writev readv block ...passed 00:15:07.433 Test: blockdev writev readv size > 128k ...passed 00:15:07.433 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.433 Test: blockdev comparev and writev ...passed 00:15:07.433 Test: blockdev nvme passthru rw ...passed 00:15:07.433 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.433 Test: blockdev nvme admin passthru ...passed 00:15:07.433 Test: blockdev copy ...passed 00:15:07.433 Suite: bdevio tests on: nvme0n1 00:15:07.433 Test: blockdev write read block ...passed 00:15:07.433 Test: blockdev write zeroes read block ...passed 00:15:07.433 Test: blockdev write zeroes read no split ...passed 00:15:07.433 Test: blockdev write zeroes read split ...passed 00:15:07.433 Test: blockdev write zeroes read split partial ...passed 00:15:07.433 Test: blockdev reset ...passed 00:15:07.433 Test: blockdev write read 8 blocks ...passed 00:15:07.433 Test: blockdev write read size > 128k ...passed 00:15:07.433 Test: blockdev write read invalid size ...passed 00:15:07.433 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:07.433 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:07.433 Test: blockdev write read max offset ...passed 00:15:07.433 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:07.433 Test: blockdev writev readv 8 blocks ...passed 00:15:07.433 Test: blockdev writev readv 30 x 1block ...passed 00:15:07.433 Test: blockdev writev readv block ...passed 00:15:07.433 Test: blockdev writev readv size > 128k ...passed 00:15:07.433 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:07.433 Test: blockdev comparev and writev ...passed 00:15:07.433 Test: blockdev nvme passthru rw ...passed 00:15:07.433 Test: blockdev nvme passthru vendor specific ...passed 00:15:07.433 Test: blockdev nvme admin passthru ...passed 00:15:07.433 Test: blockdev copy ...passed 00:15:07.433 00:15:07.433 Run Summary: Type Total Ran Passed Failed Inactive 00:15:07.433 suites 6 6 n/a 0 0 00:15:07.433 tests 138 138 138 0 0 00:15:07.433 asserts 780 780 780 0 n/a 00:15:07.433 00:15:07.433 Elapsed time = 0.285 seconds 00:15:07.433 0 00:15:07.433 20:32:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # 
killprocess 88468 00:15:07.433 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 88468 ']' 00:15:07.433 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 88468 00:15:07.433 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:15:07.433 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:07.433 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88468 00:15:07.433 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:07.433 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:07.433 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88468' 00:15:07.433 killing process with pid 88468 00:15:07.433 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 88468 00:15:07.433 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 88468 00:15:07.692 20:32:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:15:07.692 00:15:07.692 real 0m1.517s 00:15:07.692 user 0m3.658s 00:15:07.692 sys 0m0.376s 00:15:07.692 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:07.692 20:32:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:07.692 ************************************ 00:15:07.692 END TEST bdev_bounds 00:15:07.692 ************************************ 00:15:07.951 20:32:01 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:07.951 20:32:01 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:07.951 20:32:01 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:07.951 20:32:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.951 20:32:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.951 ************************************ 00:15:07.951 START TEST bdev_nbd 00:15:07.951 ************************************ 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' 
'/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=88519 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 88519 /var/tmp/spdk-nbd.sock 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 88519 ']' 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.951 20:32:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:07.951 [2024-07-12 20:32:01.958895] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:07.951 [2024-07-12 20:32:01.959090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.210 [2024-07-12 20:32:02.111931] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
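What follows is the NBD round-trip: each xNVMe bdev is exported as a /dev/nbdN node through the spdk-nbd RPC socket, the node is checked against /proc/partitions and read with a single direct-I/O block, the active mappings are listed, and every node is detached again. A condensed sketch of that loop, with device and bdev names taken from the log (the nbd kernel module is assumed to be loaded and bdev_svc already running on the socket):

    # Sketch: export bdevs over NBD, sanity-read each node, then tear everything down.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    bdevs=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5)
    for i in "${!bdevs[@]}"; do
        rpc nbd_start_disk "${bdevs[i]}" "${nbds[i]}"
        grep -q -w "${nbds[i]##*/}" /proc/partitions                   # node visible to the kernel?
        dd if="${nbds[i]}" of=/dev/null bs=4096 count=1 iflag=direct   # one-block direct read
    done
    rpc nbd_get_disks | jq -r '.[] | .nbd_device'                      # list active mappings
    for nbd in "${nbds[@]}"; do
        rpc nbd_stop_disk "$nbd"
    done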
00:15:08.210 [2024-07-12 20:32:02.131573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.210 [2024-07-12 20:32:02.217065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:08.778 20:32:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.036 1+0 records in 00:15:09.036 1+0 records out 00:15:09.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443191 s, 9.2 MB/s 00:15:09.036 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.036 20:32:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:09.037 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.037 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:09.037 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:09.037 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:09.037 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:09.037 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.295 1+0 records in 00:15:09.295 1+0 records out 00:15:09.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610087 s, 6.7 MB/s 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:09.295 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:09.554 20:32:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.554 1+0 records in 00:15:09.554 1+0 records out 00:15:09.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524624 s, 7.8 MB/s 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:09.554 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.812 1+0 records in 00:15:09.812 1+0 records out 00:15:09.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629598 s, 6.5 MB/s 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:09.812 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:10.070 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:10.070 20:32:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:10.070 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:10.070 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:10.070 20:32:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.328 1+0 records in 00:15:10.328 1+0 records out 00:15:10.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063055 s, 6.5 MB/s 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:10.328 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:10.587 20:32:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.587 1+0 records in 00:15:10.587 1+0 records out 00:15:10.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575489 s, 7.1 MB/s 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:10.587 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:10.846 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:10.846 { 00:15:10.846 "nbd_device": "/dev/nbd0", 00:15:10.846 "bdev_name": "nvme0n1" 00:15:10.846 }, 00:15:10.846 { 00:15:10.846 "nbd_device": "/dev/nbd1", 00:15:10.846 "bdev_name": "nvme1n1" 00:15:10.846 }, 00:15:10.846 { 00:15:10.846 "nbd_device": "/dev/nbd2", 00:15:10.846 "bdev_name": "nvme2n1" 00:15:10.846 }, 00:15:10.846 { 00:15:10.846 "nbd_device": "/dev/nbd3", 00:15:10.846 "bdev_name": "nvme2n2" 00:15:10.846 }, 00:15:10.846 { 00:15:10.846 "nbd_device": "/dev/nbd4", 00:15:10.846 "bdev_name": "nvme2n3" 00:15:10.846 }, 00:15:10.846 { 00:15:10.846 "nbd_device": "/dev/nbd5", 00:15:10.846 "bdev_name": "nvme3n1" 00:15:10.846 } 00:15:10.846 ]' 00:15:10.846 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:10.846 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:10.846 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:10.846 { 00:15:10.846 "nbd_device": "/dev/nbd0", 00:15:10.846 "bdev_name": "nvme0n1" 00:15:10.846 }, 00:15:10.846 { 00:15:10.846 "nbd_device": "/dev/nbd1", 00:15:10.846 "bdev_name": "nvme1n1" 00:15:10.846 }, 00:15:10.846 { 00:15:10.846 "nbd_device": "/dev/nbd2", 00:15:10.846 "bdev_name": "nvme2n1" 00:15:10.846 }, 00:15:10.846 { 00:15:10.846 "nbd_device": "/dev/nbd3", 00:15:10.846 "bdev_name": "nvme2n2" 00:15:10.846 }, 00:15:10.846 { 00:15:10.846 "nbd_device": "/dev/nbd4", 00:15:10.846 "bdev_name": "nvme2n3" 00:15:10.846 }, 00:15:10.846 { 00:15:10.846 "nbd_device": "/dev/nbd5", 00:15:10.846 "bdev_name": "nvme3n1" 00:15:10.846 } 00:15:10.846 ]' 00:15:10.846 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:10.846 20:32:04 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:10.846 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:10.846 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:10.846 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:10.846 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.846 20:32:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:11.104 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:11.104 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:11.104 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:11.104 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.104 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.105 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:11.105 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.105 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.105 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.105 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:11.428 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:11.428 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:11.428 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:11.428 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.428 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.428 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:11.428 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.428 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.428 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.428 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:11.692 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:11.692 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:11.692 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:11.692 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.692 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.692 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:11.692 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.692 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.692 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:11.692 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:11.951 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:11.951 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:11.951 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:11.951 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.951 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.951 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:11.951 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.951 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.951 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.951 20:32:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:11.951 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:11.951 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:11.951 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:11.951 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.951 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.951 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:12.210 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:12.210 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.210 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.210 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:12.468 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:12.468 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:12.468 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:12.468 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.468 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.468 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:12.468 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:12.468 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.468 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:12.468 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.468 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:12.727 20:32:06 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:12.727 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:12.728 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:12.728 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.728 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:12.728 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.728 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:12.728 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.728 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:12.728 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.728 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:12.728 20:32:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:12.986 /dev/nbd0 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.986 1+0 records in 00:15:12.986 1+0 records out 00:15:12.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421616 s, 9.7 MB/s 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:12.986 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:13.244 /dev/nbd1 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.244 1+0 records in 00:15:13.244 1+0 records out 00:15:13.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566403 s, 7.2 MB/s 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # 
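Each nbd_start_disk above is followed by waitfornbd, whose trace repeats for every device in this block: poll /proc/partitions until the name appears, then read a single 4 KiB block with O_DIRECT and check that a non-empty file came back. A rough reconstruction, not the verbatim autotest_common.sh helper (temp path and sleep interval are placeholders):

    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        # wait for the kernel to publish the new device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # prove the device answers I/O: one 4 KiB O_DIRECT read must yield a non-empty file
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }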
return 0 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:13.244 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:15:13.503 /dev/nbd10 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.504 1+0 records in 00:15:13.504 1+0 records out 00:15:13.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429802 s, 9.5 MB/s 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:13.504 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:15:13.764 /dev/nbd11 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:13.764 20:32:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.764 1+0 records in 00:15:13.764 1+0 records out 00:15:13.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075255 s, 5.4 MB/s 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:13.764 20:32:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:15:14.023 /dev/nbd12 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.023 1+0 records in 00:15:14.023 1+0 records out 00:15:14.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0013402 s, 3.1 MB/s 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:14.023 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 
00:15:14.283 /dev/nbd13 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.283 1+0 records in 00:15:14.283 1+0 records out 00:15:14.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629915 s, 6.5 MB/s 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.283 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:14.542 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:14.542 { 00:15:14.542 "nbd_device": "/dev/nbd0", 00:15:14.542 "bdev_name": "nvme0n1" 00:15:14.542 }, 00:15:14.542 { 00:15:14.542 "nbd_device": "/dev/nbd1", 00:15:14.542 "bdev_name": "nvme1n1" 00:15:14.542 }, 00:15:14.542 { 00:15:14.542 "nbd_device": "/dev/nbd10", 00:15:14.542 "bdev_name": "nvme2n1" 00:15:14.542 }, 00:15:14.542 { 00:15:14.542 "nbd_device": "/dev/nbd11", 00:15:14.542 "bdev_name": "nvme2n2" 00:15:14.542 }, 00:15:14.542 { 00:15:14.542 "nbd_device": "/dev/nbd12", 00:15:14.542 "bdev_name": "nvme2n3" 00:15:14.542 }, 00:15:14.542 { 00:15:14.542 "nbd_device": "/dev/nbd13", 00:15:14.542 "bdev_name": "nvme3n1" 00:15:14.542 } 00:15:14.542 ]' 00:15:14.542 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:14.542 { 00:15:14.542 "nbd_device": "/dev/nbd0", 00:15:14.542 "bdev_name": "nvme0n1" 00:15:14.542 }, 00:15:14.542 { 00:15:14.542 "nbd_device": "/dev/nbd1", 00:15:14.542 "bdev_name": "nvme1n1" 00:15:14.542 }, 00:15:14.542 { 00:15:14.542 "nbd_device": 
"/dev/nbd10", 00:15:14.542 "bdev_name": "nvme2n1" 00:15:14.542 }, 00:15:14.542 { 00:15:14.542 "nbd_device": "/dev/nbd11", 00:15:14.542 "bdev_name": "nvme2n2" 00:15:14.542 }, 00:15:14.542 { 00:15:14.542 "nbd_device": "/dev/nbd12", 00:15:14.542 "bdev_name": "nvme2n3" 00:15:14.542 }, 00:15:14.542 { 00:15:14.542 "nbd_device": "/dev/nbd13", 00:15:14.542 "bdev_name": "nvme3n1" 00:15:14.542 } 00:15:14.542 ]' 00:15:14.542 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:14.801 /dev/nbd1 00:15:14.801 /dev/nbd10 00:15:14.801 /dev/nbd11 00:15:14.801 /dev/nbd12 00:15:14.801 /dev/nbd13' 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:14.801 /dev/nbd1 00:15:14.801 /dev/nbd10 00:15:14.801 /dev/nbd11 00:15:14.801 /dev/nbd12 00:15:14.801 /dev/nbd13' 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:14.801 256+0 records in 00:15:14.801 256+0 records out 00:15:14.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00883514 s, 119 MB/s 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:14.801 256+0 records in 00:15:14.801 256+0 records out 00:15:14.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155027 s, 6.8 MB/s 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:14.801 20:32:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:15.060 256+0 records in 00:15:15.060 256+0 records out 00:15:15.060 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134354 s, 7.8 MB/s 00:15:15.060 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.060 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 
00:15:15.060 256+0 records in 00:15:15.060 256+0 records out 00:15:15.060 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146093 s, 7.2 MB/s 00:15:15.060 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.060 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:15.319 256+0 records in 00:15:15.319 256+0 records out 00:15:15.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153979 s, 6.8 MB/s 00:15:15.319 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.319 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:15.579 256+0 records in 00:15:15.579 256+0 records out 00:15:15.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150997 s, 6.9 MB/s 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:15.579 256+0 records in 00:15:15.579 256+0 records out 00:15:15.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138194 s, 7.6 MB/s 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.579 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:15.838 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:15.838 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:15.838 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:15.838 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:15.838 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:15.838 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:15.838 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:15.838 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:15.838 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.838 20:32:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:16.096 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:16.097 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:16.097 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:16.097 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.097 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.097 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:16.097 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.097 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.097 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.097 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:16.355 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:16.355 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:16.355 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
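The write/verify pass that just finished is the core of nbd_rpc_data_verify: 1 MiB of random data is staged in a scratch file, copied onto every exported NBD device with O_DIRECT, and then the first 1 MiB of each device is compared byte for byte against the reference. Stripped of the trace plumbing, the pass amounts to (nbd_list as above):

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    # write phase: stage 1 MiB of random data and push it to every device
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: the first 1 MiB read back from each device must match the reference file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"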
nbd_name=nbd10 00:15:16.355 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.355 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.355 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:16.355 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.355 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.355 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.355 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:16.613 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:16.613 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:16.613 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:16.613 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.613 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.613 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:16.613 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.613 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.613 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.613 20:32:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:16.871 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:16.871 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:16.871 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:16.871 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.871 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.871 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:16.871 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.871 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.871 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.871 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:17.129 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:17.129 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:17.129 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:17.129 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.129 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.129 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:17.129 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.129 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.129 20:32:11 
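With all six devices stopped again, the nbd_get_count check that follows asks the target which devices are still exported and expects zero. The helper is a thin wrapper around the nbd_get_disks RPC; the bare 'true' step in its trace reflects a guard around grep -c, which exits non-zero when nothing matches. A minimal sketch:

    nbd_get_count() {
        local rpc_server=$1
        local disks_json disk_names count
        disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        disk_names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits 1 on zero matches, so don't let that abort the caller
        count=$(echo "$disk_names" | grep -c /dev/nbd || true)
        echo "$count"
    }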
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:17.129 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.129 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:17.410 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:17.410 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:17.410 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:17.683 malloc_lvol_verify 00:15:17.683 20:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:17.942 511354dd-1858-4d10-8074-1139c38f7342 00:15:18.200 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:18.200 8eee3703-d9ad-403e-a16a-be86846780f0 00:15:18.200 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:18.459 /dev/nbd0 00:15:18.459 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:18.459 mke2fs 1.46.5 (30-Dec-2021) 00:15:18.459 Discarding device blocks: 0/4096 done 00:15:18.459 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:18.459 00:15:18.459 Allocating group tables: 0/1 done 00:15:18.459 Writing inode tables: 0/1 done 00:15:18.459 Creating journal (1024 blocks): done 00:15:18.459 Writing superblocks and filesystem accounting information: 0/1 done 00:15:18.459 00:15:18.459 20:32:12 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:18.459 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:18.459 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:18.459 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:18.459 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:18.459 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:18.459 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.459 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:18.717 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 88519 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 88519 ']' 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 88519 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88519 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88519' 00:15:18.718 killing process with pid 88519 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 88519 00:15:18.718 20:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 88519 00:15:18.976 20:32:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:15:18.976 00:15:18.976 real 0m11.258s 00:15:18.976 user 0m16.143s 00:15:18.976 sys 0m4.053s 00:15:18.976 20:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:18.976 20:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:18.976 ************************************ 00:15:18.976 END TEST bdev_nbd 00:15:18.976 ************************************ 00:15:19.234 20:32:13 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:19.234 20:32:13 
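nbd_with_lvol_verify, traced just above, checks that an NBD export backed by a logical volume works end to end: build a small malloc bdev, create an lvstore and a logical volume on it, export the lvol as /dev/nbd0, and run mkfs.ext4 against it, failing the test if mkfs does. A sketch of the sequence (the rpc wrapper function is mine; sizes follow the trace, and the '4' lines up with the 4096 1k-block filesystem mkfs reports):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc bdev_lvol_create lvol 4 -l lvs                    # small lvol carved out of the lvstore
    rpc nbd_start_disk lvs/lvol /dev/nbd0
    waitfornbd nbd0
    mkfs.ext4 /dev/nbd0; mkfs_ret=$?
    rpc nbd_stop_disk /dev/nbd0
    (( mkfs_ret == 0 ))   # the helper only succeeds if mkfs did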
blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:15:19.234 20:32:13 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:15:19.234 20:32:13 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:15:19.234 20:32:13 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:15:19.234 20:32:13 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:19.234 20:32:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.234 20:32:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.234 ************************************ 00:15:19.234 START TEST bdev_fio 00:15:19.234 ************************************ 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:19.234 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b 
in "${bdevs_name[@]}" 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n3 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:19.234 ************************************ 00:15:19.234 START TEST bdev_fio_rw_verify 00:15:19.234 ************************************ 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:19.234 20:32:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:19.492 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:19.492 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:19.492 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:19.492 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:19.492 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:19.492 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:19.492 fio-3.35 00:15:19.492 Starting 6 threads 00:15:31.693 00:15:31.693 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=88927: Fri Jul 12 20:32:24 2024 00:15:31.693 read: IOPS=30.3k, BW=118MiB/s (124MB/s)(1184MiB/10001msec) 00:15:31.693 slat (usec): min=3, max=1284, avg= 6.73, stdev= 4.83 00:15:31.693 clat (usec): min=93, max=3878, avg=623.41, stdev=199.33 00:15:31.693 lat (usec): min=100, max=3909, avg=630.14, stdev=200.01 
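The bdev.fio file driving this run was assembled just before it by fio_config_gen plus the per-bdev echoes in the trace: a global section copied from template files (not visible here), serialize_overlap=1 because the fio binary reports a 3.x version, and one [job_*] section per bdev. Note that with the spdk_bdev ioengine the filename= value is an SPDK bdev name, not a block-device path. A sketch of the visible part of that assembly:

    {
        echo 'serialize_overlap=1'
        for b in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
            echo "[job_${b}]"
            echo "filename=${b}"
        done
    } >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio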
00:15:31.693 clat percentiles (usec): 00:15:31.693 | 50.000th=[ 660], 99.000th=[ 1074], 99.900th=[ 1516], 99.990th=[ 2999], 00:15:31.693 | 99.999th=[ 3851] 00:15:31.693 write: IOPS=30.7k, BW=120MiB/s (126MB/s)(1198MiB/10001msec); 0 zone resets 00:15:31.693 slat (usec): min=13, max=3240, avg=24.03, stdev=23.54 00:15:31.693 clat (usec): min=89, max=4552, avg=697.44, stdev=201.96 00:15:31.693 lat (usec): min=107, max=4593, avg=721.47, stdev=203.71 00:15:31.693 clat percentiles (usec): 00:15:31.693 | 50.000th=[ 717], 99.000th=[ 1221], 99.900th=[ 1713], 99.990th=[ 2671], 00:15:31.693 | 99.999th=[ 4424] 00:15:31.693 bw ( KiB/s): min=98988, max=147104, per=100.00%, avg=123269.11, stdev=2455.98, samples=114 00:15:31.693 iops : min=24746, max=36776, avg=30816.95, stdev=614.01, samples=114 00:15:31.693 lat (usec) : 100=0.01%, 250=3.05%, 500=17.69%, 750=45.36%, 1000=30.80% 00:15:31.693 lat (msec) : 2=3.06%, 4=0.03%, 10=0.01% 00:15:31.693 cpu : usr=61.31%, sys=26.43%, ctx=7736, majf=0, minf=25676 00:15:31.693 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:31.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.693 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.693 issued rwts: total=303120,306770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.693 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:31.693 00:15:31.693 Run status group 0 (all jobs): 00:15:31.693 READ: bw=118MiB/s (124MB/s), 118MiB/s-118MiB/s (124MB/s-124MB/s), io=1184MiB (1242MB), run=10001-10001msec 00:15:31.693 WRITE: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=1198MiB (1257MB), run=10001-10001msec 00:15:31.693 ----------------------------------------------------- 00:15:31.693 Suppressions used: 00:15:31.693 count bytes template 00:15:31.693 6 48 /usr/src/fio/parse.c 00:15:31.693 3432 329472 /usr/src/fio/iolog.c 00:15:31.693 1 8 libtcmalloc_minimal.so 00:15:31.693 1 904 libcrypto.so 00:15:31.693 ----------------------------------------------------- 00:15:31.693 00:15:31.693 00:15:31.693 real 0m11.310s 00:15:31.693 user 0m37.641s 00:15:31.693 sys 0m16.206s 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:31.693 ************************************ 00:15:31.693 END TEST bdev_fio_rw_verify 00:15:31.693 ************************************ 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local 
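The rw_verify pass summarized above is plain fio driven through SPDK's fio bdev plugin: the plugin shared object is injected with LD_PRELOAD (prefixed with libasan on ASan builds, which is what the ldd probe in the trace detects), and the devices come from the JSON bdev config rather than from /dev paths. A condensed form of the invocation, with paths and parameters as traced:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # empty on a non-ASan build

    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 \
        --aux-path=/home/vagrant/spdk_repo/spdk/../output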
fio_dir=/usr/src/fio 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "9b59c1dd-344a-438b-b8af-f30688cf4460"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9b59c1dd-344a-438b-b8af-f30688cf4460",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d9192e85-32b5-4ec2-90a2-c9a32a618722"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d9192e85-32b5-4ec2-90a2-c9a32a618722",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "53bfb587-5a8d-4313-b30b-d5543da957dc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "53bfb587-5a8d-4313-b30b-d5543da957dc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e065821d-6881-4a81-a06f-73208d8e7889"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e065821d-6881-4a81-a06f-73208d8e7889",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "0306990e-c5d8-43b6-bf00-e77ad4a1e536"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0306990e-c5d8-43b6-bf00-e77ad4a1e536",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "77f5a2c6-5531-4f83-8fa2-10a17e563123"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "77f5a2c6-5531-4f83-8fa2-10a17e563123",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:31.693 20:32:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:15:31.694 20:32:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:31.694 /home/vagrant/spdk_repo/spdk 00:15:31.694 20:32:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:15:31.694 20:32:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:15:31.694 20:32:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 
00:15:31.694 00:15:31.694 real 0m11.484s 00:15:31.694 user 0m37.742s 00:15:31.694 sys 0m16.279s 00:15:31.694 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:31.694 20:32:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:31.694 ************************************ 00:15:31.694 END TEST bdev_fio 00:15:31.694 ************************************ 00:15:31.694 20:32:24 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:31.694 20:32:24 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:31.694 20:32:24 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:31.694 20:32:24 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:15:31.694 20:32:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.694 20:32:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.694 ************************************ 00:15:31.694 START TEST bdev_verify 00:15:31.694 ************************************ 00:15:31.694 20:32:24 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:31.694 [2024-07-12 20:32:24.790779] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:31.694 [2024-07-12 20:32:24.790959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89094 ] 00:15:31.694 [2024-07-12 20:32:24.943757] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:31.694 [2024-07-12 20:32:24.966852] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:31.694 [2024-07-12 20:32:25.043745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.694 [2024-07-12 20:32:25.043807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.694 Running I/O for 5 seconds... 
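bdev_verify exercises the same six bdevs with the bdevperf example app instead of fio: a verify workload at queue depth 128 with 4 KiB I/Os for 5 seconds, on the two cores allowed by the 0x3 mask, against the same JSON bdev config. The invocation from the trace, trimmed to its essentials (the big-I/O variant at the end of this excerpt reruns it with -o 65536):

    # -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -m core mask
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3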
00:15:36.964 00:15:36.964 Latency(us) 00:15:36.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.964 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:36.964 Verification LBA range: start 0x0 length 0xa0000 00:15:36.964 nvme0n1 : 5.06 1745.75 6.82 0.00 0.00 73185.17 5659.93 69587.32 00:15:36.964 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:36.964 Verification LBA range: start 0xa0000 length 0xa0000 00:15:36.964 nvme0n1 : 5.06 1693.97 6.62 0.00 0.00 74707.48 9592.09 80073.08 00:15:36.964 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:36.964 Verification LBA range: start 0x0 length 0xbd0bd 00:15:36.964 nvme1n1 : 5.06 2948.69 11.52 0.00 0.00 43101.45 3872.58 65774.31 00:15:36.964 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:36.964 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:36.964 nvme1n1 : 5.06 2816.83 11.00 0.00 0.00 45191.74 3366.17 70540.57 00:15:36.965 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:36.965 Verification LBA range: start 0x0 length 0x80000 00:15:36.965 nvme2n1 : 5.03 1754.78 6.85 0.00 0.00 72549.30 8340.95 66727.56 00:15:36.965 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:36.965 Verification LBA range: start 0x80000 length 0x80000 00:15:36.965 nvme2n1 : 5.05 1697.39 6.63 0.00 0.00 75089.52 9949.56 68157.44 00:15:36.965 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:36.965 Verification LBA range: start 0x0 length 0x80000 00:15:36.965 nvme2n2 : 5.07 1743.18 6.81 0.00 0.00 72890.80 10307.03 63867.81 00:15:36.965 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:36.965 Verification LBA range: start 0x80000 length 0x80000 00:15:36.965 nvme2n2 : 5.06 1696.19 6.63 0.00 0.00 75013.77 9294.20 63867.81 00:15:36.965 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:36.965 Verification LBA range: start 0x0 length 0x80000 00:15:36.965 nvme2n3 : 5.07 1742.64 6.81 0.00 0.00 72782.92 10962.39 64821.06 00:15:36.965 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:36.965 Verification LBA range: start 0x80000 length 0x80000 00:15:36.965 nvme2n3 : 5.05 1699.07 6.64 0.00 0.00 74736.30 6196.13 63867.81 00:15:36.965 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:36.965 Verification LBA range: start 0x0 length 0x20000 00:15:36.965 nvme3n1 : 5.07 1742.11 6.81 0.00 0.00 72679.72 8162.21 69587.32 00:15:36.965 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:36.965 Verification LBA range: start 0x20000 length 0x20000 00:15:36.965 nvme3n1 : 5.06 1695.22 6.62 0.00 0.00 74764.41 9175.04 71970.44 00:15:36.965 =================================================================================================================== 00:15:36.965 Total : 22975.81 89.75 0.00 0.00 66364.79 3366.17 80073.08 00:15:36.965 00:15:36.965 real 0m5.929s 00:15:36.965 user 0m9.160s 00:15:36.965 sys 0m1.717s 00:15:36.965 20:32:30 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:36.965 20:32:30 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:36.965 ************************************ 00:15:36.965 END TEST bdev_verify 00:15:36.965 ************************************ 00:15:36.965 20:32:30 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:15:36.965 20:32:30 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:36.965 20:32:30 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:15:36.965 20:32:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.965 20:32:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.965 ************************************ 00:15:36.965 START TEST bdev_verify_big_io 00:15:36.965 ************************************ 00:15:36.965 20:32:30 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:36.965 [2024-07-12 20:32:30.755796] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:36.965 [2024-07-12 20:32:30.755982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89183 ] 00:15:36.965 [2024-07-12 20:32:30.897758] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:36.965 [2024-07-12 20:32:30.917038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:36.965 [2024-07-12 20:32:31.003984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.965 [2024-07-12 20:32:31.004070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.223 Running I/O for 5 seconds... 
00:15:43.858 00:15:43.858 Latency(us) 00:15:43.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.858 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.858 Verification LBA range: start 0x0 length 0xa000 00:15:43.858 nvme0n1 : 5.89 195.67 12.23 0.00 0.00 631703.53 88652.33 842673.80 00:15:43.858 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.858 Verification LBA range: start 0xa000 length 0xa000 00:15:43.858 nvme0n1 : 5.87 123.99 7.75 0.00 0.00 891694.90 7060.01 2913134.78 00:15:43.858 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.858 Verification LBA range: start 0x0 length 0xbd0b 00:15:43.858 nvme1n1 : 5.84 205.53 12.85 0.00 0.00 585354.82 8817.57 728283.69 00:15:43.858 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.858 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:43.858 nvme1n1 : 5.85 136.76 8.55 0.00 0.00 916040.16 66727.56 1479445.41 00:15:43.858 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.858 Verification LBA range: start 0x0 length 0x8000 00:15:43.858 nvme2n1 : 5.89 153.48 9.59 0.00 0.00 754743.79 91035.46 941811.90 00:15:43.858 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.858 Verification LBA range: start 0x8000 length 0x8000 00:15:43.858 nvme2n1 : 5.85 140.80 8.80 0.00 0.00 866093.23 64344.44 785478.75 00:15:43.858 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.858 Verification LBA range: start 0x0 length 0x8000 00:15:43.858 nvme2n2 : 5.90 181.62 11.35 0.00 0.00 626648.21 55288.55 876990.84 00:15:43.858 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.858 Verification LBA range: start 0x8000 length 0x8000 00:15:43.858 nvme2n2 : 5.88 93.90 5.87 0.00 0.00 1280573.01 14656.23 1860745.77 00:15:43.858 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.858 Verification LBA range: start 0x0 length 0x8000 00:15:43.858 nvme2n3 : 5.91 148.98 9.31 0.00 0.00 743129.49 52428.80 1929379.84 00:15:43.858 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.858 Verification LBA range: start 0x8000 length 0x8000 00:15:43.858 nvme2n3 : 5.87 106.24 6.64 0.00 0.00 1100387.63 76260.07 1136275.08 00:15:43.858 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:43.858 Verification LBA range: start 0x0 length 0x2000 00:15:43.858 nvme3n1 : 5.91 159.70 9.98 0.00 0.00 680529.95 12332.68 2013265.92 00:15:43.858 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:43.858 Verification LBA range: start 0x2000 length 0x2000 00:15:43.858 nvme3n1 : 5.86 153.00 9.56 0.00 0.00 743409.11 61008.06 1090519.04 00:15:43.858 =================================================================================================================== 00:15:43.858 Total : 1799.68 112.48 0.00 0.00 778728.95 7060.01 2913134.78 00:15:43.858 00:15:43.858 real 0m6.846s 00:15:43.858 user 0m12.269s 00:15:43.858 sys 0m0.644s 00:15:43.858 20:32:37 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:43.858 20:32:37 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.858 ************************************ 00:15:43.858 END TEST bdev_verify_big_io 00:15:43.858 ************************************ 00:15:43.858 
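The verify and big-I/O runs above are both driven by bdevperf through the JSON configuration passed with --json. That file is generated by the test scripts and is not echoed into this log; a minimal sketch of what such a file could contain for the xNVMe bdevs exercised here is shown below, assuming the bdev_xnvme_create RPC with name/filename/io_mechanism parameters (all values are illustrative, not taken from the log).

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_xnvme_create",
              "params": {
                "name": "nvme0n1",
                "filename": "/dev/nvme0n1",
                "io_mechanism": "io_uring"
              }
            }
          ]
        }
      ]
    }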
20:32:37 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:43.858 20:32:37 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:43.858 20:32:37 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:43.858 20:32:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:43.858 20:32:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.858 ************************************ 00:15:43.858 START TEST bdev_write_zeroes 00:15:43.858 ************************************ 00:15:43.858 20:32:37 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:43.858 [2024-07-12 20:32:37.670884] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:43.858 [2024-07-12 20:32:37.671070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89282 ] 00:15:43.858 [2024-07-12 20:32:37.824147] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:43.859 [2024-07-12 20:32:37.846813] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.859 [2024-07-12 20:32:37.937658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.130 Running I/O for 1 seconds... 
00:15:45.065 00:15:45.065 Latency(us) 00:15:45.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.065 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.065 nvme0n1 : 1.01 8777.65 34.29 0.00 0.00 14565.08 8281.37 18826.71 00:15:45.065 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.065 nvme1n1 : 1.01 16336.86 63.82 0.00 0.00 7797.33 4051.32 12571.00 00:15:45.065 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.065 nvme2n1 : 1.02 8826.00 34.48 0.00 0.00 14407.21 5183.30 19660.80 00:15:45.065 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.065 nvme2n2 : 1.02 8815.83 34.44 0.00 0.00 14411.74 5481.19 20375.74 00:15:45.065 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.065 nvme2n3 : 1.02 8805.60 34.40 0.00 0.00 14414.29 5808.87 20494.89 00:15:45.065 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.065 nvme3n1 : 1.02 8795.34 34.36 0.00 0.00 14417.07 6136.55 20733.21 00:15:45.065 =================================================================================================================== 00:15:45.065 Total : 60357.29 235.77 0.00 0.00 12646.77 4051.32 20733.21 00:15:45.324 00:15:45.324 real 0m1.887s 00:15:45.324 user 0m1.106s 00:15:45.324 sys 0m0.616s 00:15:45.324 20:32:39 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.324 20:32:39 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:45.324 ************************************ 00:15:45.324 END TEST bdev_write_zeroes 00:15:45.324 ************************************ 00:15:45.582 20:32:39 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:45.582 20:32:39 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:45.583 20:32:39 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:45.583 20:32:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.583 20:32:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.583 ************************************ 00:15:45.583 START TEST bdev_json_nonenclosed 00:15:45.583 ************************************ 00:15:45.583 20:32:39 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:45.583 [2024-07-12 20:32:39.613409] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:45.583 [2024-07-12 20:32:39.613597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89326 ] 00:15:45.841 [2024-07-12 20:32:39.765088] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:45.841 [2024-07-12 20:32:39.787452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.841 [2024-07-12 20:32:39.874611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.841 [2024-07-12 20:32:39.874742] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:45.841 [2024-07-12 20:32:39.874794] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:45.841 [2024-07-12 20:32:39.874835] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:46.100 00:15:46.100 real 0m0.480s 00:15:46.100 user 0m0.237s 00:15:46.100 sys 0m0.138s 00:15:46.100 20:32:39 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:15:46.100 20:32:39 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.100 20:32:39 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:46.100 ************************************ 00:15:46.100 END TEST bdev_json_nonenclosed 00:15:46.100 ************************************ 00:15:46.100 20:32:40 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:15:46.100 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@782 -- # true 00:15:46.100 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:46.100 20:32:40 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:46.100 20:32:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.100 20:32:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:46.100 ************************************ 00:15:46.100 START TEST bdev_json_nonarray 00:15:46.100 ************************************ 00:15:46.100 20:32:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:46.100 [2024-07-12 20:32:40.123989] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:46.100 [2024-07-12 20:32:40.124251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89350 ] 00:15:46.359 [2024-07-12 20:32:40.267067] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:46.359 [2024-07-12 20:32:40.285704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.359 [2024-07-12 20:32:40.372207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.359 [2024-07-12 20:32:40.372376] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:15:46.359 [2024-07-12 20:32:40.372427] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:46.359 [2024-07-12 20:32:40.372464] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:46.359 00:15:46.359 real 0m0.453s 00:15:46.359 user 0m0.219s 00:15:46.359 sys 0m0.130s 00:15:46.359 20:32:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:15:46.359 20:32:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.359 20:32:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:46.359 ************************************ 00:15:46.359 END TEST bdev_json_nonarray 00:15:46.359 ************************************ 00:15:46.617 20:32:40 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@785 -- # true 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:46.617 20:32:40 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:47.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:49.086 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:49.086 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:49.086 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:49.086 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:49.086 00:15:49.086 real 0m51.297s 00:15:49.086 user 1m29.763s 00:15:49.086 sys 0m28.484s 00:15:49.086 20:32:43 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:49.086 ************************************ 00:15:49.086 END TEST blockdev_xnvme 00:15:49.086 ************************************ 00:15:49.086 20:32:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.086 20:32:43 -- common/autotest_common.sh@1142 -- # return 0 00:15:49.086 20:32:43 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:49.086 20:32:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:49.086 20:32:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.086 20:32:43 -- common/autotest_common.sh@10 -- # set +x 00:15:49.086 ************************************ 00:15:49.086 START TEST ublk 00:15:49.086 ************************************ 00:15:49.086 20:32:43 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:49.086 * Looking for test storage... 
00:15:49.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:49.086 20:32:43 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:49.086 20:32:43 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:49.086 20:32:43 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:49.086 20:32:43 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:49.086 20:32:43 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:49.086 20:32:43 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:49.086 20:32:43 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:49.086 20:32:43 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:49.086 20:32:43 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:49.086 20:32:43 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:49.086 20:32:43 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:49.086 20:32:43 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:49.086 20:32:43 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:49.086 20:32:43 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:49.086 20:32:43 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:49.086 20:32:43 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:49.086 20:32:43 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:49.086 20:32:43 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:49.086 20:32:43 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:49.086 20:32:43 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:49.086 20:32:43 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:49.086 20:32:43 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.086 20:32:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.086 ************************************ 00:15:49.086 START TEST test_save_ublk_config 00:15:49.086 ************************************ 00:15:49.086 20:32:43 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:15:49.086 20:32:43 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:49.086 20:32:43 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=89628 00:15:49.086 20:32:43 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:49.086 20:32:43 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:49.086 20:32:43 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 89628 00:15:49.086 20:32:43 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 89628 ']' 00:15:49.086 20:32:43 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.086 20:32:43 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.086 20:32:43 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.086 20:32:43 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.086 20:32:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:49.344 [2024-07-12 20:32:43.277850] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:15:49.344 [2024-07-12 20:32:43.278048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89628 ] 00:15:49.344 [2024-07-12 20:32:43.429154] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:49.344 [2024-07-12 20:32:43.450002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.602 [2024-07-12 20:32:43.552362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.167 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.167 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:15:50.167 20:32:44 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:50.167 20:32:44 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:50.167 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.167 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:50.168 [2024-07-12 20:32:44.194273] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:50.168 [2024-07-12 20:32:44.194725] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:50.168 malloc0 00:15:50.168 [2024-07-12 20:32:44.232490] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:50.168 [2024-07-12 20:32:44.232634] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:50.168 [2024-07-12 20:32:44.232662] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:50.168 [2024-07-12 20:32:44.232682] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:50.168 [2024-07-12 20:32:44.241377] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:50.168 [2024-07-12 20:32:44.241411] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:50.168 [2024-07-12 20:32:44.248267] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:50.168 [2024-07-12 20:32:44.248417] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:50.168 [2024-07-12 20:32:44.265293] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:50.168 0 00:15:50.168 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.168 20:32:44 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:50.168 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.168 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:50.426 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.426 20:32:44 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:50.426 "subsystems": [ 00:15:50.426 { 00:15:50.426 "subsystem": "keyring", 00:15:50.426 "config": [] 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "subsystem": "iobuf", 00:15:50.426 "config": [ 00:15:50.426 { 00:15:50.426 "method": "iobuf_set_options", 00:15:50.426 "params": { 00:15:50.426 "small_pool_count": 8192, 
00:15:50.426 "large_pool_count": 1024, 00:15:50.426 "small_bufsize": 8192, 00:15:50.426 "large_bufsize": 135168 00:15:50.426 } 00:15:50.426 } 00:15:50.426 ] 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "subsystem": "sock", 00:15:50.426 "config": [ 00:15:50.426 { 00:15:50.426 "method": "sock_set_default_impl", 00:15:50.426 "params": { 00:15:50.426 "impl_name": "posix" 00:15:50.426 } 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "method": "sock_impl_set_options", 00:15:50.426 "params": { 00:15:50.426 "impl_name": "ssl", 00:15:50.426 "recv_buf_size": 4096, 00:15:50.426 "send_buf_size": 4096, 00:15:50.426 "enable_recv_pipe": true, 00:15:50.426 "enable_quickack": false, 00:15:50.426 "enable_placement_id": 0, 00:15:50.426 "enable_zerocopy_send_server": true, 00:15:50.426 "enable_zerocopy_send_client": false, 00:15:50.426 "zerocopy_threshold": 0, 00:15:50.426 "tls_version": 0, 00:15:50.426 "enable_ktls": false 00:15:50.426 } 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "method": "sock_impl_set_options", 00:15:50.426 "params": { 00:15:50.426 "impl_name": "posix", 00:15:50.426 "recv_buf_size": 2097152, 00:15:50.426 "send_buf_size": 2097152, 00:15:50.426 "enable_recv_pipe": true, 00:15:50.426 "enable_quickack": false, 00:15:50.426 "enable_placement_id": 0, 00:15:50.426 "enable_zerocopy_send_server": true, 00:15:50.426 "enable_zerocopy_send_client": false, 00:15:50.426 "zerocopy_threshold": 0, 00:15:50.426 "tls_version": 0, 00:15:50.426 "enable_ktls": false 00:15:50.426 } 00:15:50.426 } 00:15:50.426 ] 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "subsystem": "vmd", 00:15:50.426 "config": [] 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "subsystem": "accel", 00:15:50.426 "config": [ 00:15:50.426 { 00:15:50.426 "method": "accel_set_options", 00:15:50.426 "params": { 00:15:50.426 "small_cache_size": 128, 00:15:50.426 "large_cache_size": 16, 00:15:50.426 "task_count": 2048, 00:15:50.426 "sequence_count": 2048, 00:15:50.426 "buf_count": 2048 00:15:50.426 } 00:15:50.426 } 00:15:50.426 ] 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "subsystem": "bdev", 00:15:50.426 "config": [ 00:15:50.426 { 00:15:50.426 "method": "bdev_set_options", 00:15:50.426 "params": { 00:15:50.426 "bdev_io_pool_size": 65535, 00:15:50.426 "bdev_io_cache_size": 256, 00:15:50.426 "bdev_auto_examine": true, 00:15:50.426 "iobuf_small_cache_size": 128, 00:15:50.426 "iobuf_large_cache_size": 16 00:15:50.426 } 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "method": "bdev_raid_set_options", 00:15:50.426 "params": { 00:15:50.426 "process_window_size_kb": 1024 00:15:50.426 } 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "method": "bdev_iscsi_set_options", 00:15:50.426 "params": { 00:15:50.426 "timeout_sec": 30 00:15:50.426 } 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "method": "bdev_nvme_set_options", 00:15:50.426 "params": { 00:15:50.426 "action_on_timeout": "none", 00:15:50.426 "timeout_us": 0, 00:15:50.426 "timeout_admin_us": 0, 00:15:50.426 "keep_alive_timeout_ms": 10000, 00:15:50.426 "arbitration_burst": 0, 00:15:50.426 "low_priority_weight": 0, 00:15:50.426 "medium_priority_weight": 0, 00:15:50.426 "high_priority_weight": 0, 00:15:50.426 "nvme_adminq_poll_period_us": 10000, 00:15:50.426 "nvme_ioq_poll_period_us": 0, 00:15:50.426 "io_queue_requests": 0, 00:15:50.426 "delay_cmd_submit": true, 00:15:50.426 "transport_retry_count": 4, 00:15:50.426 "bdev_retry_count": 3, 00:15:50.426 "transport_ack_timeout": 0, 00:15:50.426 "ctrlr_loss_timeout_sec": 0, 00:15:50.426 "reconnect_delay_sec": 0, 00:15:50.426 "fast_io_fail_timeout_sec": 0, 00:15:50.426 
"disable_auto_failback": false, 00:15:50.426 "generate_uuids": false, 00:15:50.426 "transport_tos": 0, 00:15:50.426 "nvme_error_stat": false, 00:15:50.426 "rdma_srq_size": 0, 00:15:50.426 "io_path_stat": false, 00:15:50.426 "allow_accel_sequence": false, 00:15:50.426 "rdma_max_cq_size": 0, 00:15:50.426 "rdma_cm_event_timeout_ms": 0, 00:15:50.426 "dhchap_digests": [ 00:15:50.426 "sha256", 00:15:50.426 "sha384", 00:15:50.426 "sha512" 00:15:50.426 ], 00:15:50.426 "dhchap_dhgroups": [ 00:15:50.426 "null", 00:15:50.426 "ffdhe2048", 00:15:50.426 "ffdhe3072", 00:15:50.426 "ffdhe4096", 00:15:50.426 "ffdhe6144", 00:15:50.426 "ffdhe8192" 00:15:50.426 ] 00:15:50.426 } 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "method": "bdev_nvme_set_hotplug", 00:15:50.426 "params": { 00:15:50.426 "period_us": 100000, 00:15:50.426 "enable": false 00:15:50.426 } 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "method": "bdev_malloc_create", 00:15:50.426 "params": { 00:15:50.426 "name": "malloc0", 00:15:50.426 "num_blocks": 8192, 00:15:50.426 "block_size": 4096, 00:15:50.426 "physical_block_size": 4096, 00:15:50.426 "uuid": "568c6570-2c64-4fa2-a90f-eb4364f2e313", 00:15:50.426 "optimal_io_boundary": 0 00:15:50.426 } 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "method": "bdev_wait_for_examine" 00:15:50.426 } 00:15:50.426 ] 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "subsystem": "scsi", 00:15:50.426 "config": null 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "subsystem": "scheduler", 00:15:50.426 "config": [ 00:15:50.426 { 00:15:50.426 "method": "framework_set_scheduler", 00:15:50.426 "params": { 00:15:50.426 "name": "static" 00:15:50.426 } 00:15:50.426 } 00:15:50.426 ] 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "subsystem": "vhost_scsi", 00:15:50.426 "config": [] 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "subsystem": "vhost_blk", 00:15:50.426 "config": [] 00:15:50.426 }, 00:15:50.426 { 00:15:50.426 "subsystem": "ublk", 00:15:50.426 "config": [ 00:15:50.426 { 00:15:50.426 "method": "ublk_create_target", 00:15:50.426 "params": { 00:15:50.426 "cpumask": "1" 00:15:50.426 } 00:15:50.427 }, 00:15:50.427 { 00:15:50.427 "method": "ublk_start_disk", 00:15:50.427 "params": { 00:15:50.427 "bdev_name": "malloc0", 00:15:50.427 "ublk_id": 0, 00:15:50.427 "num_queues": 1, 00:15:50.427 "queue_depth": 128 00:15:50.427 } 00:15:50.427 } 00:15:50.427 ] 00:15:50.427 }, 00:15:50.427 { 00:15:50.427 "subsystem": "nbd", 00:15:50.427 "config": [] 00:15:50.427 }, 00:15:50.427 { 00:15:50.427 "subsystem": "nvmf", 00:15:50.427 "config": [ 00:15:50.427 { 00:15:50.427 "method": "nvmf_set_config", 00:15:50.427 "params": { 00:15:50.427 "discovery_filter": "match_any", 00:15:50.427 "admin_cmd_passthru": { 00:15:50.427 "identify_ctrlr": false 00:15:50.427 } 00:15:50.427 } 00:15:50.427 }, 00:15:50.427 { 00:15:50.427 "method": "nvmf_set_max_subsystems", 00:15:50.427 "params": { 00:15:50.427 "max_subsystems": 1024 00:15:50.427 } 00:15:50.427 }, 00:15:50.427 { 00:15:50.427 "method": "nvmf_set_crdt", 00:15:50.427 "params": { 00:15:50.427 "crdt1": 0, 00:15:50.427 "crdt2": 0, 00:15:50.427 "crdt3": 0 00:15:50.427 } 00:15:50.427 } 00:15:50.427 ] 00:15:50.427 }, 00:15:50.427 { 00:15:50.427 "subsystem": "iscsi", 00:15:50.427 "config": [ 00:15:50.427 { 00:15:50.427 "method": "iscsi_set_options", 00:15:50.427 "params": { 00:15:50.427 "node_base": "iqn.2016-06.io.spdk", 00:15:50.427 "max_sessions": 128, 00:15:50.427 "max_connections_per_session": 2, 00:15:50.427 "max_queue_depth": 64, 00:15:50.427 "default_time2wait": 2, 00:15:50.427 "default_time2retain": 20, 
00:15:50.427 "first_burst_length": 8192, 00:15:50.427 "immediate_data": true, 00:15:50.427 "allow_duplicated_isid": false, 00:15:50.427 "error_recovery_level": 0, 00:15:50.427 "nop_timeout": 60, 00:15:50.427 "nop_in_interval": 30, 00:15:50.427 "disable_chap": false, 00:15:50.427 "require_chap": false, 00:15:50.427 "mutual_chap": false, 00:15:50.427 "chap_group": 0, 00:15:50.427 "max_large_datain_per_connection": 64, 00:15:50.427 "max_r2t_per_connection": 4, 00:15:50.427 "pdu_pool_size": 36864, 00:15:50.427 "immediate_data_pool_size": 16384, 00:15:50.427 "data_out_pool_size": 2048 00:15:50.427 } 00:15:50.427 } 00:15:50.427 ] 00:15:50.427 } 00:15:50.427 ] 00:15:50.427 }' 00:15:50.427 20:32:44 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 89628 00:15:50.427 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 89628 ']' 00:15:50.427 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 89628 00:15:50.427 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:15:50.427 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.427 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89628 00:15:50.427 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:50.427 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:50.427 killing process with pid 89628 00:15:50.427 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89628' 00:15:50.427 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 89628 00:15:50.427 20:32:44 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 89628 00:15:50.992 [2024-07-12 20:32:44.904139] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:50.992 [2024-07-12 20:32:44.940319] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:50.993 [2024-07-12 20:32:44.940507] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:50.993 [2024-07-12 20:32:44.948274] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:50.993 [2024-07-12 20:32:44.948347] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:50.993 [2024-07-12 20:32:44.948367] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:50.993 [2024-07-12 20:32:44.948413] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:50.993 [2024-07-12 20:32:44.948604] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:51.251 20:32:45 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=89665 00:15:51.251 20:32:45 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 89665 00:15:51.251 20:32:45 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 89665 ']' 00:15:51.251 20:32:45 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.251 20:32:45 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.251 20:32:45 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:51.251 20:32:45 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.251 20:32:45 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.251 20:32:45 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:51.251 "subsystems": [ 00:15:51.251 { 00:15:51.251 "subsystem": "keyring", 00:15:51.251 "config": [] 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "iobuf", 00:15:51.251 "config": [ 00:15:51.251 { 00:15:51.251 "method": "iobuf_set_options", 00:15:51.251 "params": { 00:15:51.251 "small_pool_count": 8192, 00:15:51.251 "large_pool_count": 1024, 00:15:51.251 "small_bufsize": 8192, 00:15:51.251 "large_bufsize": 135168 00:15:51.251 } 00:15:51.251 } 00:15:51.251 ] 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "sock", 00:15:51.251 "config": [ 00:15:51.251 { 00:15:51.251 "method": "sock_set_default_impl", 00:15:51.251 "params": { 00:15:51.251 "impl_name": "posix" 00:15:51.251 } 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "method": "sock_impl_set_options", 00:15:51.251 "params": { 00:15:51.251 "impl_name": "ssl", 00:15:51.251 "recv_buf_size": 4096, 00:15:51.251 "send_buf_size": 4096, 00:15:51.251 "enable_recv_pipe": true, 00:15:51.251 "enable_quickack": false, 00:15:51.251 "enable_placement_id": 0, 00:15:51.251 "enable_zerocopy_send_server": true, 00:15:51.251 "enable_zerocopy_send_client": false, 00:15:51.251 "zerocopy_threshold": 0, 00:15:51.251 "tls_version": 0, 00:15:51.251 "enable_ktls": false 00:15:51.251 } 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "method": "sock_impl_set_options", 00:15:51.251 "params": { 00:15:51.251 "impl_name": "posix", 00:15:51.251 "recv_buf_size": 2097152, 00:15:51.251 "send_buf_size": 2097152, 00:15:51.251 "enable_recv_pipe": true, 00:15:51.251 "enable_quickack": false, 00:15:51.251 "enable_placement_id": 0, 00:15:51.251 "enable_zerocopy_send_server": true, 00:15:51.251 "enable_zerocopy_send_client": false, 00:15:51.251 "zerocopy_threshold": 0, 00:15:51.251 "tls_version": 0, 00:15:51.251 "enable_ktls": false 00:15:51.251 } 00:15:51.251 } 00:15:51.251 ] 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "vmd", 00:15:51.251 "config": [] 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "accel", 00:15:51.251 "config": [ 00:15:51.251 { 00:15:51.251 "method": "accel_set_options", 00:15:51.251 "params": { 00:15:51.251 "small_cache_size": 128, 00:15:51.251 "large_cache_size": 16, 00:15:51.251 "task_count": 2048, 00:15:51.251 "sequence_count": 2048, 00:15:51.251 "buf_count": 2048 00:15:51.251 } 00:15:51.251 } 00:15:51.251 ] 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "bdev", 00:15:51.251 "config": [ 00:15:51.251 { 00:15:51.251 "method": "bdev_set_options", 00:15:51.251 "params": { 00:15:51.251 "bdev_io_pool_size": 65535, 00:15:51.251 "bdev_io_cache_size": 256, 00:15:51.251 "bdev_auto_examine": true, 00:15:51.251 "iobuf_small_cache_size": 128, 00:15:51.251 "iobuf_large_cache_size": 16 00:15:51.251 } 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "method": "bdev_raid_set_options", 00:15:51.251 "params": { 00:15:51.251 "process_window_size_kb": 1024 00:15:51.251 } 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "method": "bdev_iscsi_set_options", 00:15:51.251 "params": { 00:15:51.251 "timeout_sec": 30 00:15:51.251 } 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "method": "bdev_nvme_set_options", 00:15:51.251 "params": { 00:15:51.251 "action_on_timeout": "none", 00:15:51.251 "timeout_us": 0, 00:15:51.251 
"timeout_admin_us": 0, 00:15:51.251 "keep_alive_timeout_ms": 10000, 00:15:51.251 "arbitration_burst": 0, 00:15:51.251 "low_priority_weight": 0, 00:15:51.251 "medium_priority_weight": 0, 00:15:51.251 "high_priority_weight": 0, 00:15:51.251 "nvme_adminq_poll_period_us": 10000, 00:15:51.251 "nvme_ioq_poll_period_us": 0, 00:15:51.251 "io_queue_requests": 0, 00:15:51.251 "delay_cmd_submit": true, 00:15:51.251 "transport_retry_count": 4, 00:15:51.251 "bdev_retry_count": 3, 00:15:51.251 "transport_ack_timeout": 0, 00:15:51.251 "ctrlr_loss_timeout_sec": 0, 00:15:51.251 "reconnect_delay_sec": 0, 00:15:51.251 "fast_io_fail_timeout_sec": 0, 00:15:51.251 "disable_auto_failback": false, 00:15:51.251 "generate_uuids": false, 00:15:51.251 "transport_tos": 0, 00:15:51.251 "nvme_error_stat": false, 00:15:51.251 "rdma_srq_size": 0, 00:15:51.251 "io_path_stat": false, 00:15:51.251 "allow_accel_sequence": false, 00:15:51.251 "rdma_max_cq_size": 0, 00:15:51.251 "rdma_cm_event_timeout_ms": 0, 00:15:51.251 "dhchap_digests": [ 00:15:51.251 "sha256", 00:15:51.251 "sha384", 00:15:51.251 "sha512" 00:15:51.251 ], 00:15:51.251 "dhchap_dhgroups": [ 00:15:51.251 "null", 00:15:51.251 "ffdhe2048", 00:15:51.251 "ffdhe3072", 00:15:51.251 "ffdhe4096", 00:15:51.251 "ffdhe6144", 00:15:51.251 "ffdhe8192" 00:15:51.251 ] 00:15:51.251 } 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "method": "bdev_nvme_set_hotplug", 00:15:51.251 "params": { 00:15:51.251 "period_us": 100000, 00:15:51.251 "enable": false 00:15:51.251 } 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "method": "bdev_malloc_create", 00:15:51.251 "params": { 00:15:51.251 "name": "malloc0", 00:15:51.251 "num_blocks": 8192, 00:15:51.251 "block_size": 4096, 00:15:51.251 "physical_block_size": 4096, 00:15:51.251 "uuid": "568c6570-2c64-4fa2-a90f-eb4364f2e313", 00:15:51.251 "optimal_io_boundary": 0 00:15:51.251 } 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "method": "bdev_wait_for_examine" 00:15:51.251 } 00:15:51.251 ] 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "scsi", 00:15:51.251 "config": null 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "scheduler", 00:15:51.251 "config": [ 00:15:51.251 { 00:15:51.251 "method": "framework_set_scheduler", 00:15:51.251 "params": { 00:15:51.251 "name": "static" 00:15:51.251 } 00:15:51.251 } 00:15:51.251 ] 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "vhost_scsi", 00:15:51.251 "config": [] 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "vhost_blk", 00:15:51.251 "config": [] 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "ublk", 00:15:51.251 "config": [ 00:15:51.251 { 00:15:51.251 "method": "ublk_create_target", 00:15:51.251 "params": { 00:15:51.251 "cpumask": "1" 00:15:51.251 } 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "method": "ublk_start_disk", 00:15:51.251 "params": { 00:15:51.251 "bdev_name": "malloc0", 00:15:51.251 "ublk_id": 0, 00:15:51.251 "num_queues": 1, 00:15:51.251 "queue_depth": 128 00:15:51.251 } 00:15:51.251 } 00:15:51.251 ] 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "nbd", 00:15:51.251 "config": [] 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "nvmf", 00:15:51.251 "config": [ 00:15:51.251 { 00:15:51.251 "method": "nvmf_set_config", 00:15:51.251 "params": { 00:15:51.251 "discovery_filter": "match_any", 00:15:51.251 "admin_cmd_passthru": { 00:15:51.251 "identify_ctrlr": false 00:15:51.251 } 00:15:51.251 } 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "method": "nvmf_set_max_subsystems", 00:15:51.251 "params": { 00:15:51.251 "max_subsystems": 1024 
00:15:51.251 } 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "method": "nvmf_set_crdt", 00:15:51.251 "params": { 00:15:51.251 "crdt1": 0, 00:15:51.251 "crdt2": 0, 00:15:51.251 "crdt3": 0 00:15:51.251 } 00:15:51.251 } 00:15:51.251 ] 00:15:51.251 }, 00:15:51.251 { 00:15:51.251 "subsystem": "iscsi", 00:15:51.251 "config": [ 00:15:51.251 { 00:15:51.251 "method": "iscsi_set_options", 00:15:51.251 "params": { 00:15:51.252 "node_base": "iqn.2016-06.io.spdk", 00:15:51.252 "max_sessions": 128, 00:15:51.252 "max_connections_per_session": 2, 00:15:51.252 "max_queue_depth": 64, 00:15:51.252 "default_time2wait": 2, 00:15:51.252 "default_time2retain": 20, 00:15:51.252 "first_burst_length": 8192, 00:15:51.252 "immediate_data": true, 00:15:51.252 "allow_duplicated_isid": false, 00:15:51.252 "error_recovery_level": 0, 00:15:51.252 "nop_timeout": 60, 00:15:51.252 "nop_in_interval": 30, 00:15:51.252 "disable_chap": false, 00:15:51.252 "require_chap": false, 00:15:51.252 "mutual_chap": false, 00:15:51.252 "chap_group": 0, 00:15:51.252 "max_large_datain_per_connection": 64, 00:15:51.252 "max_r2t_per_connection": 4, 00:15:51.252 "pdu_pool_size": 36864, 00:15:51.252 "immediate_data_pool_size": 16384, 00:15:51.252 "data_out_pool_size": 2048 00:15:51.252 } 00:15:51.252 } 00:15:51.252 ] 00:15:51.252 } 00:15:51.252 ] 00:15:51.252 }' 00:15:51.252 20:32:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:51.252 [2024-07-12 20:32:45.360141] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:51.252 [2024-07-12 20:32:45.360400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89665 ] 00:15:51.510 [2024-07-12 20:32:45.511023] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:51.510 [2024-07-12 20:32:45.530672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.510 [2024-07-12 20:32:45.633156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.081 [2024-07-12 20:32:46.022289] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:52.081 [2024-07-12 20:32:46.022698] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:52.081 [2024-07-12 20:32:46.030430] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:52.081 [2024-07-12 20:32:46.030536] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:52.081 [2024-07-12 20:32:46.030555] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:52.081 [2024-07-12 20:32:46.030576] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:52.081 [2024-07-12 20:32:46.037289] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:52.081 [2024-07-12 20:32:46.037317] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:52.081 [2024-07-12 20:32:46.044308] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:52.081 [2024-07-12 20:32:46.044449] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:52.081 [2024-07-12 20:32:46.061271] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:52.081 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.081 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:15:52.081 20:32:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:52.081 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.081 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:52.081 20:32:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 89665 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 89665 ']' 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 89665 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89665 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:52.360 killing process with pid 89665 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89665' 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 
89665 00:15:52.360 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 89665 00:15:52.619 [2024-07-12 20:32:46.648683] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:52.619 [2024-07-12 20:32:46.679391] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:52.619 [2024-07-12 20:32:46.679612] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:52.619 [2024-07-12 20:32:46.685281] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:52.619 [2024-07-12 20:32:46.685344] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:52.619 [2024-07-12 20:32:46.685369] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:52.619 [2024-07-12 20:32:46.685419] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:52.619 [2024-07-12 20:32:46.685605] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:52.877 20:32:46 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:52.877 00:15:52.877 real 0m3.818s 00:15:52.877 user 0m3.013s 00:15:52.877 sys 0m1.663s 00:15:52.877 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.877 20:32:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:52.877 ************************************ 00:15:52.877 END TEST test_save_ublk_config 00:15:52.877 ************************************ 00:15:53.135 20:32:47 ublk -- common/autotest_common.sh@1142 -- # return 0 00:15:53.135 20:32:47 ublk -- ublk/ublk.sh@139 -- # spdk_pid=89717 00:15:53.135 20:32:47 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:53.135 20:32:47 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:53.135 20:32:47 ublk -- ublk/ublk.sh@141 -- # waitforlisten 89717 00:15:53.135 20:32:47 ublk -- common/autotest_common.sh@829 -- # '[' -z 89717 ']' 00:15:53.135 20:32:47 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.135 20:32:47 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.135 20:32:47 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.135 20:32:47 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.135 20:32:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.135 [2024-07-12 20:32:47.126623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:53.135 [2024-07-12 20:32:47.126823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89717 ] 00:15:53.135 [2024-07-12 20:32:47.270911] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
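The test that just completed exercises the save/restore path for the ublk configuration: a first spdk_tgt creates the ublk target and /dev/ublkb0, the running configuration is captured with save_config, and a second spdk_tgt is started with that JSON fed back through -c (via /dev/fd/63 above). A rough equivalent using scripts/rpc.py is sketched below; sizes, queue counts and the output path are illustrative, not taken from the log.

    ./build/bin/spdk_tgt -L ublk &
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create 128 4096            # creates a malloc bdev (Malloc0)
    ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 1 -d 128  # exposes /dev/ublkb0
    ./scripts/rpc.py save_config > ublk_config.json
    # stop the target, then restart it with the saved config and
    # confirm the same disk comes back:
    ./build/bin/spdk_tgt -L ublk -c ublk_config.json &
    ./scripts/rpc.py ublk_get_disks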
00:15:53.393 [2024-07-12 20:32:47.291370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:53.393 [2024-07-12 20:32:47.381994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.393 [2024-07-12 20:32:47.382045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.958 20:32:48 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.958 20:32:48 ublk -- common/autotest_common.sh@862 -- # return 0 00:15:53.958 20:32:48 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:53.958 20:32:48 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:53.958 20:32:48 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.958 20:32:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.958 ************************************ 00:15:53.958 START TEST test_create_ublk 00:15:53.958 ************************************ 00:15:53.958 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:15:53.958 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:53.958 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.958 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.958 [2024-07-12 20:32:48.055265] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:53.958 [2024-07-12 20:32:48.057142] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:53.958 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.958 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:53.958 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:53.958 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.958 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.215 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:54.215 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.215 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.215 [2024-07-12 20:32:48.143588] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:54.215 [2024-07-12 20:32:48.144105] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:54.215 [2024-07-12 20:32:48.144132] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:54.215 [2024-07-12 20:32:48.144147] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:54.215 [2024-07-12 20:32:48.151612] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:54.215 [2024-07-12 20:32:48.151649] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:54.215 [2024-07-12 20:32:48.158270] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:54.215 [2024-07-12 20:32:48.169419] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:54.215 [2024-07-12 20:32:48.184389] ublk.c: 328:ublk_ctrl_process_cqe: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:54.215 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:54.215 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.215 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.215 20:32:48 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:54.215 { 00:15:54.215 "ublk_device": "/dev/ublkb0", 00:15:54.215 "id": 0, 00:15:54.215 "queue_depth": 512, 00:15:54.215 "num_queues": 4, 00:15:54.215 "bdev_name": "Malloc0" 00:15:54.215 } 00:15:54.215 ]' 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:54.215 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:54.473 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:54.473 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:54.473 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:54.473 20:32:48 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:54.473 20:32:48 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:54.473 20:32:48 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:54.473 20:32:48 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:54.473 20:32:48 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:54.473 20:32:48 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:54.473 20:32:48 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:54.473 20:32:48 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:54.473 20:32:48 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:54.473 20:32:48 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:54.473 20:32:48 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:54.473 20:32:48 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:54.473 fio: verification read phase will never 
start because write phase uses all of runtime 00:15:54.473 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:54.473 fio-3.35 00:15:54.473 Starting 1 process 00:16:06.673 00:16:06.673 fio_test: (groupid=0, jobs=1): err= 0: pid=89756: Fri Jul 12 20:32:58 2024 00:16:06.673 write: IOPS=10.8k, BW=42.0MiB/s (44.1MB/s)(420MiB/10001msec); 0 zone resets 00:16:06.673 clat (usec): min=55, max=7995, avg=91.88, stdev=161.01 00:16:06.673 lat (usec): min=55, max=8000, avg=92.49, stdev=161.03 00:16:06.673 clat percentiles (usec): 00:16:06.673 | 1.00th=[ 73], 5.00th=[ 75], 10.00th=[ 76], 20.00th=[ 78], 00:16:06.673 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 82], 00:16:06.673 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 95], 95.00th=[ 101], 00:16:06.673 | 99.00th=[ 119], 99.50th=[ 141], 99.90th=[ 3228], 99.95th=[ 3589], 00:16:06.673 | 99.99th=[ 4113] 00:16:06.673 bw ( KiB/s): min=18856, max=45904, per=99.95%, avg=42997.47, stdev=5929.63, samples=19 00:16:06.673 iops : min= 4714, max=11476, avg=10749.47, stdev=1482.44, samples=19 00:16:06.673 lat (usec) : 100=94.73%, 250=4.86%, 500=0.02%, 750=0.02%, 1000=0.04% 00:16:06.673 lat (msec) : 2=0.10%, 4=0.22%, 10=0.01% 00:16:06.673 cpu : usr=1.85%, sys=5.87%, ctx=107560, majf=0, minf=796 00:16:06.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.673 issued rwts: total=0,107557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.673 00:16:06.673 Run status group 0 (all jobs): 00:16:06.673 WRITE: bw=42.0MiB/s (44.1MB/s), 42.0MiB/s-42.0MiB/s (44.1MB/s-44.1MB/s), io=420MiB (441MB), run=10001-10001msec 00:16:06.673 00:16:06.673 Disk stats (read/write): 00:16:06.673 ublkb0: ios=0/106413, merge=0/0, ticks=0/9158, in_queue=9159, util=99.11% 00:16:06.673 20:32:58 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.673 [2024-07-12 20:32:58.702464] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:06.673 [2024-07-12 20:32:58.758299] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:06.673 [2024-07-12 20:32:58.759552] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:06.673 [2024-07-12 20:32:58.766349] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:06.673 [2024-07-12 20:32:58.766677] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:06.673 [2024-07-12 20:32:58.766706] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.673 20:32:58 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:06.673 20:32:58 
ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.673 [2024-07-12 20:32:58.782398] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:06.673 request: 00:16:06.673 { 00:16:06.673 "ublk_id": 0, 00:16:06.673 "method": "ublk_stop_disk", 00:16:06.673 "req_id": 1 00:16:06.673 } 00:16:06.673 Got JSON-RPC error response 00:16:06.673 response: 00:16:06.673 { 00:16:06.673 "code": -19, 00:16:06.673 "message": "No such device" 00:16:06.673 } 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:06.673 20:32:58 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.673 [2024-07-12 20:32:58.796389] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:06.673 [2024-07-12 20:32:58.798368] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:06.673 [2024-07-12 20:32:58.798411] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.673 20:32:58 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.673 20:32:58 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:06.673 20:32:58 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.673 20:32:58 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:06.673 20:32:58 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:06.673 20:32:58 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:06.673 20:32:58 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.673 20:32:58 
ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.673 20:32:58 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:06.673 20:32:58 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:06.673 ************************************ 00:16:06.673 END TEST test_create_ublk 00:16:06.673 ************************************ 00:16:06.673 20:32:58 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:06.673 00:16:06.673 real 0m10.942s 00:16:06.673 user 0m0.624s 00:16:06.673 sys 0m0.676s 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:06.673 20:32:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.673 20:32:59 ublk -- common/autotest_common.sh@1142 -- # return 0 00:16:06.673 20:32:59 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:06.673 20:32:59 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:06.673 20:32:59 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.673 20:32:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 ************************************ 00:16:06.674 START TEST test_create_multi_ublk 00:16:06.674 ************************************ 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 [2024-07-12 20:32:59.052298] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:06.674 [2024-07-12 20:32:59.054136] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 [2024-07-12 20:32:59.154548] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:06.674 [2024-07-12 20:32:59.155133] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:06.674 [2024-07-12 20:32:59.155160] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:06.674 [2024-07-12 
20:32:59.155171] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:06.674 [2024-07-12 20:32:59.163685] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:06.674 [2024-07-12 20:32:59.163718] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:06.674 [2024-07-12 20:32:59.169320] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:06.674 [2024-07-12 20:32:59.170153] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:06.674 [2024-07-12 20:32:59.181378] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 [2024-07-12 20:32:59.284553] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:06.674 [2024-07-12 20:32:59.285059] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:06.674 [2024-07-12 20:32:59.285081] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:06.674 [2024-07-12 20:32:59.285095] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:06.674 [2024-07-12 20:32:59.292280] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:06.674 [2024-07-12 20:32:59.292309] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:06.674 [2024-07-12 20:32:59.302354] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:06.674 [2024-07-12 20:32:59.303142] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:06.674 [2024-07-12 20:32:59.311300] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 [2024-07-12 20:32:59.413440] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:06.674 [2024-07-12 20:32:59.414001] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:06.674 [2024-07-12 20:32:59.414031] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:06.674 [2024-07-12 20:32:59.414042] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:06.674 [2024-07-12 20:32:59.421303] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:06.674 [2024-07-12 20:32:59.421327] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:06.674 [2024-07-12 20:32:59.428301] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:06.674 [2024-07-12 20:32:59.429101] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:06.674 [2024-07-12 20:32:59.440301] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 [2024-07-12 20:32:59.540515] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:06.674 [2024-07-12 20:32:59.541067] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:06.674 [2024-07-12 20:32:59.541092] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:06.674 [2024-07-12 20:32:59.541106] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:06.674 [2024-07-12 20:32:59.548285] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:06.674 [2024-07-12 20:32:59.548319] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:06.674 [2024-07-12 
20:32:59.558317] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:06.674 [2024-07-12 20:32:59.559090] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:06.674 [2024-07-12 20:32:59.567303] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:06.674 { 00:16:06.674 "ublk_device": "/dev/ublkb0", 00:16:06.674 "id": 0, 00:16:06.674 "queue_depth": 512, 00:16:06.674 "num_queues": 4, 00:16:06.674 "bdev_name": "Malloc0" 00:16:06.674 }, 00:16:06.674 { 00:16:06.674 "ublk_device": "/dev/ublkb1", 00:16:06.674 "id": 1, 00:16:06.674 "queue_depth": 512, 00:16:06.674 "num_queues": 4, 00:16:06.674 "bdev_name": "Malloc1" 00:16:06.674 }, 00:16:06.674 { 00:16:06.674 "ublk_device": "/dev/ublkb2", 00:16:06.674 "id": 2, 00:16:06.674 "queue_depth": 512, 00:16:06.674 "num_queues": 4, 00:16:06.674 "bdev_name": "Malloc2" 00:16:06.674 }, 00:16:06.674 { 00:16:06.674 "ublk_device": "/dev/ublkb3", 00:16:06.674 "id": 3, 00:16:06.674 "queue_depth": 512, 00:16:06.674 "num_queues": 4, 00:16:06.674 "bdev_name": "Malloc3" 00:16:06.674 } 00:16:06.674 ]' 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:06.674 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:06.675 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r 
'.[1].queue_depth' 00:16:06.675 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:06.675 20:32:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.675 [2024-07-12 20:33:00.586481] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:06.675 [2024-07-12 20:33:00.630343] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:06.675 [2024-07-12 20:33:00.635599] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:06.675 [2024-07-12 20:33:00.647525] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:06.675 [2024-07-12 20:33:00.647870] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:06.675 [2024-07-12 20:33:00.647892] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.675 [2024-07-12 20:33:00.665465] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:06.675 [2024-07-12 20:33:00.698366] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:06.675 [2024-07-12 20:33:00.699645] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:06.675 [2024-07-12 20:33:00.705259] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:06.675 [2024-07-12 20:33:00.705603] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:06.675 [2024-07-12 20:33:00.705623] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.675 [2024-07-12 20:33:00.711554] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:06.675 [2024-07-12 20:33:00.756380] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:06.675 [2024-07-12 20:33:00.757652] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:06.675 [2024-07-12 20:33:00.766276] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:06.675 [2024-07-12 20:33:00.766622] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:06.675 [2024-07-12 20:33:00.766642] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.675 20:33:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.675 [2024-07-12 20:33:00.773458] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:06.675 [2024-07-12 20:33:00.819878] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:06.933 
[2024-07-12 20:33:00.821345] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:06.933 [2024-07-12 20:33:00.826279] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:06.933 [2024-07-12 20:33:00.826590] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:06.933 [2024-07-12 20:33:00.826611] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:06.933 20:33:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.933 20:33:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:06.933 [2024-07-12 20:33:01.075404] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:06.933 [2024-07-12 20:33:01.076764] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:06.933 [2024-07-12 20:33:01.076844] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.190 20:33:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.191 20:33:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:07.191 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.191 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.449 20:33:01 
ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:07.449 ************************************ 00:16:07.449 END TEST test_create_multi_ublk 00:16:07.449 ************************************ 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:07.449 00:16:07.449 real 0m2.442s 00:16:07.449 user 0m1.269s 00:16:07.449 sys 0m0.152s 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:07.449 20:33:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.449 20:33:01 ublk -- common/autotest_common.sh@1142 -- # return 0 00:16:07.449 20:33:01 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:07.449 20:33:01 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:07.449 20:33:01 ublk -- ublk/ublk.sh@130 -- # killprocess 89717 00:16:07.449 20:33:01 ublk -- common/autotest_common.sh@948 -- # '[' -z 89717 ']' 00:16:07.449 20:33:01 ublk -- common/autotest_common.sh@952 -- # kill -0 89717 00:16:07.449 20:33:01 ublk -- common/autotest_common.sh@953 -- # uname 00:16:07.449 20:33:01 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.449 20:33:01 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89717 00:16:07.449 killing process with pid 89717 00:16:07.449 20:33:01 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:07.449 20:33:01 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:07.449 20:33:01 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89717' 00:16:07.449 20:33:01 ublk -- common/autotest_common.sh@967 -- # kill 89717 00:16:07.449 20:33:01 ublk -- common/autotest_common.sh@972 -- # wait 89717 00:16:07.707 [2024-07-12 20:33:01.722219] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:07.707 [2024-07-12 20:33:01.722341] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:07.965 00:16:07.965 real 0m18.933s 00:16:07.965 user 0m30.334s 00:16:07.965 sys 0m6.964s 00:16:07.965 20:33:01 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:07.965 ************************************ 00:16:07.965 END TEST ublk 00:16:07.965 ************************************ 00:16:07.965 20:33:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.965 20:33:02 -- common/autotest_common.sh@1142 -- # return 0 00:16:07.965 20:33:02 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:07.965 20:33:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:07.965 20:33:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.965 
20:33:02 -- common/autotest_common.sh@10 -- # set +x 00:16:07.965 ************************************ 00:16:07.965 START TEST ublk_recovery 00:16:07.965 ************************************ 00:16:07.965 20:33:02 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:08.225 * Looking for test storage... 00:16:08.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:08.225 20:33:02 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:08.225 20:33:02 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:08.225 20:33:02 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:08.225 20:33:02 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:08.225 20:33:02 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:08.225 20:33:02 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:08.225 20:33:02 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:08.225 20:33:02 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:08.225 20:33:02 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:08.225 20:33:02 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:08.225 20:33:02 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=90060 00:16:08.225 20:33:02 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:08.225 20:33:02 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 90060 00:16:08.225 20:33:02 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:08.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.225 20:33:02 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 90060 ']' 00:16:08.225 20:33:02 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.225 20:33:02 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.225 20:33:02 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.225 20:33:02 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.225 20:33:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.225 [2024-07-12 20:33:02.249595] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:08.225 [2024-07-12 20:33:02.249797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90060 ] 00:16:08.483 [2024-07-12 20:33:02.402697] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:08.483 [2024-07-12 20:33:02.421131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:08.483 [2024-07-12 20:33:02.516687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.483 [2024-07-12 20:33:02.516726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.049 20:33:03 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.049 20:33:03 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:16:09.049 20:33:03 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:09.049 20:33:03 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.049 20:33:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.049 [2024-07-12 20:33:03.184294] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:09.049 [2024-07-12 20:33:03.186237] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:09.049 20:33:03 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.049 20:33:03 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:09.049 20:33:03 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.049 20:33:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.307 malloc0 00:16:09.307 20:33:03 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.307 20:33:03 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:09.307 20:33:03 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.307 20:33:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.307 [2024-07-12 20:33:03.231762] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:09.307 [2024-07-12 20:33:03.231901] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:09.307 [2024-07-12 20:33:03.231920] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:09.307 [2024-07-12 20:33:03.231945] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:09.307 [2024-07-12 20:33:03.240407] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:09.307 [2024-07-12 20:33:03.240444] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:09.307 [2024-07-12 20:33:03.247280] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:09.307 [2024-07-12 20:33:03.247493] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:09.307 [2024-07-12 20:33:03.273287] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:09.307 1 00:16:09.307 20:33:03 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.307 20:33:03 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:10.243 20:33:04 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=90094 00:16:10.243 20:33:04 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:10.243 20:33:04 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:10.243 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.243 
fio-3.35 00:16:10.243 Starting 1 process 00:16:15.510 20:33:09 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 90060 00:16:15.510 20:33:09 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:20.776 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 90060 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:20.776 20:33:14 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=90199 00:16:20.776 20:33:14 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:20.776 20:33:14 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:20.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.776 20:33:14 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 90199 00:16:20.776 20:33:14 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 90199 ']' 00:16:20.776 20:33:14 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.776 20:33:14 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.776 20:33:14 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.776 20:33:14 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.776 20:33:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.776 [2024-07-12 20:33:14.417660] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:20.776 [2024-07-12 20:33:14.417820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90199 ] 00:16:20.776 [2024-07-12 20:33:14.563640] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:20.776 [2024-07-12 20:33:14.579807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:20.776 [2024-07-12 20:33:14.661337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.776 [2024-07-12 20:33:14.661356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.343 20:33:15 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.343 20:33:15 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:16:21.343 20:33:15 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:21.343 20:33:15 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.343 20:33:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.343 [2024-07-12 20:33:15.350275] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:21.343 [2024-07-12 20:33:15.355144] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:21.343 20:33:15 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.343 20:33:15 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:21.343 20:33:15 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.343 20:33:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.343 malloc0 00:16:21.343 20:33:15 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.343 20:33:15 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:21.343 20:33:15 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.343 20:33:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.343 [2024-07-12 20:33:15.400753] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:21.343 [2024-07-12 20:33:15.400989] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:21.343 [2024-07-12 20:33:15.401029] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:21.343 [2024-07-12 20:33:15.409307] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:21.343 [2024-07-12 20:33:15.409343] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:21.343 1 00:16:21.343 [2024-07-12 20:33:15.409467] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:21.343 20:33:15 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.343 20:33:15 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 90094 00:16:47.894 [2024-07-12 20:33:39.632288] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:47.894 [2024-07-12 20:33:39.639782] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:47.894 [2024-07-12 20:33:39.650632] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:47.894 [2024-07-12 20:33:39.650668] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:14.439 00:17:14.439 fio_test: (groupid=0, jobs=1): err= 0: pid=90097: Fri Jul 12 20:34:04 2024 00:17:14.439 read: IOPS=9934, BW=38.8MiB/s (40.7MB/s)(2328MiB/60002msec) 00:17:14.439 slat (nsec): min=1995, max=185309, avg=6388.28, stdev=2813.70 00:17:14.439 clat (usec): min=1245, max=30372k, avg=6587.86, stdev=324330.93 00:17:14.439 lat 
(usec): min=1266, max=30372k, avg=6594.24, stdev=324330.92 00:17:14.439 clat percentiles (msec): 00:17:14.439 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:17:14.439 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:17:14.439 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:17:14.439 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:17:14.439 | 99.99th=[17113] 00:17:14.439 bw ( KiB/s): min=31840, max=86832, per=100.00%, avg=79722.81, stdev=10160.68, samples=59 00:17:14.439 iops : min= 7960, max=21708, avg=19930.69, stdev=2540.17, samples=59 00:17:14.439 write: IOPS=9923, BW=38.8MiB/s (40.6MB/s)(2326MiB/60002msec); 0 zone resets 00:17:14.439 slat (usec): min=2, max=235, avg= 6.40, stdev= 2.96 00:17:14.439 clat (usec): min=1113, max=30372k, avg=6289.06, stdev=304828.89 00:17:14.439 lat (usec): min=1121, max=30372k, avg=6295.46, stdev=304828.89 00:17:14.439 clat percentiles (msec): 00:17:14.439 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:17:14.439 | 30.00th=[ 3], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:17:14.439 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 4], 00:17:14.439 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:17:14.439 | 99.99th=[17113] 00:17:14.439 bw ( KiB/s): min=32144, max=86080, per=100.00%, avg=79642.12, stdev=10183.21, samples=59 00:17:14.439 iops : min= 8036, max=21520, avg=19910.51, stdev=2545.79, samples=59 00:17:14.439 lat (msec) : 2=0.08%, 4=94.77%, 10=5.11%, 20=0.03%, >=2000=0.01% 00:17:14.439 cpu : usr=5.17%, sys=12.03%, ctx=36375, majf=0, minf=13 00:17:14.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:14.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:14.439 issued rwts: total=596091,595428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:14.439 00:17:14.439 Run status group 0 (all jobs): 00:17:14.439 READ: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s (40.7MB/s-40.7MB/s), io=2328MiB (2442MB), run=60002-60002msec 00:17:14.439 WRITE: bw=38.8MiB/s (40.6MB/s), 38.8MiB/s-38.8MiB/s (40.6MB/s-40.6MB/s), io=2326MiB (2439MB), run=60002-60002msec 00:17:14.439 00:17:14.439 Disk stats (read/write): 00:17:14.439 ublkb1: ios=593973/593337, merge=0/0, ticks=3864168/3615771, in_queue=7479940, util=99.95% 00:17:14.439 20:34:04 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.439 [2024-07-12 20:34:04.555734] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:14.439 [2024-07-12 20:34:04.595450] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:14.439 [2024-07-12 20:34:04.595734] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:14.439 [2024-07-12 20:34:04.604330] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:14.439 [2024-07-12 20:34:04.604495] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:14.439 [2024-07-12 20:34:04.604519] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.439 20:34:04 
ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.439 [2024-07-12 20:34:04.620445] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:14.439 [2024-07-12 20:34:04.623384] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:14.439 [2024-07-12 20:34:04.623431] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.439 20:34:04 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:14.439 20:34:04 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:14.439 20:34:04 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 90199 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 90199 ']' 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 90199 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90199 00:17:14.439 killing process with pid 90199 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90199' 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@967 -- # kill 90199 00:17:14.439 20:34:04 ublk_recovery -- common/autotest_common.sh@972 -- # wait 90199 00:17:14.439 [2024-07-12 20:34:04.831459] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:14.439 [2024-07-12 20:34:04.831579] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:14.439 00:17:14.439 real 1m3.090s 00:17:14.439 user 1m47.973s 00:17:14.439 sys 0m18.128s 00:17:14.439 20:34:05 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:14.439 20:34:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.439 ************************************ 00:17:14.439 END TEST ublk_recovery 00:17:14.439 ************************************ 00:17:14.439 20:34:05 -- common/autotest_common.sh@1142 -- # return 0 00:17:14.439 20:34:05 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:14.439 20:34:05 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:14.439 20:34:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:14.439 20:34:05 -- common/autotest_common.sh@10 -- # set +x 00:17:14.439 20:34:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:14.439 20:34:05 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:17:14.439 20:34:05 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:17:14.439 20:34:05 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:17:14.439 20:34:05 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:17:14.439 20:34:05 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:17:14.439 20:34:05 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:17:14.439 20:34:05 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:17:14.439 20:34:05 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:17:14.439 20:34:05 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:17:14.439 20:34:05 -- spdk/autotest.sh@340 -- # run_test ftl 
/home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:14.439 20:34:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:14.439 20:34:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:14.439 20:34:05 -- common/autotest_common.sh@10 -- # set +x 00:17:14.439 ************************************ 00:17:14.439 START TEST ftl 00:17:14.439 ************************************ 00:17:14.439 20:34:05 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:14.439 * Looking for test storage... 00:17:14.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:14.439 20:34:05 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:14.439 20:34:05 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:14.439 20:34:05 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:14.439 20:34:05 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:14.439 20:34:05 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:14.439 20:34:05 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:14.439 20:34:05 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.439 20:34:05 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:14.439 20:34:05 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:14.439 20:34:05 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.439 20:34:05 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.439 20:34:05 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:14.439 20:34:05 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:14.439 20:34:05 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:14.439 20:34:05 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:14.439 20:34:05 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:14.439 20:34:05 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:14.439 20:34:05 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.439 20:34:05 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.439 20:34:05 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:14.439 20:34:05 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:14.439 20:34:05 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:14.440 20:34:05 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:14.440 20:34:05 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:14.440 20:34:05 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:14.440 20:34:05 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:14.440 20:34:05 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:14.440 20:34:05 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:14.440 20:34:05 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:14.440 20:34:05 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.440 20:34:05 ftl -- 
ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:14.440 20:34:05 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:14.440 20:34:05 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:14.440 20:34:05 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:14.440 20:34:05 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:14.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:14.440 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:14.440 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:14.440 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:14.440 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:14.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.440 20:34:05 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=90977 00:17:14.440 20:34:05 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:14.440 20:34:05 ftl -- ftl/ftl.sh@38 -- # waitforlisten 90977 00:17:14.440 20:34:05 ftl -- common/autotest_common.sh@829 -- # '[' -z 90977 ']' 00:17:14.440 20:34:05 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.440 20:34:05 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.440 20:34:05 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.440 20:34:05 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.440 20:34:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:14.440 [2024-07-12 20:34:05.972430] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:14.440 [2024-07-12 20:34:05.972585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90977 ] 00:17:14.440 [2024-07-12 20:34:06.115514] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
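The target above is started with --wait-for-rpc and the test then sits in waitforlisten until the RPC socket answers before any configuration RPCs are sent. A minimal sketch of that wait pattern, assuming the same rpc.py path and the spdk_get_version RPC (this is illustrative, not the helper's actual implementation):
# start the target paused and poll its RPC socket until it responds
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
spdk_tgt_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done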
00:17:14.440 [2024-07-12 20:34:06.134438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.440 [2024-07-12 20:34:06.219198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.440 20:34:06 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.440 20:34:06 ftl -- common/autotest_common.sh@862 -- # return 0 00:17:14.440 20:34:06 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:14.440 20:34:07 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:14.440 20:34:07 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:14.440 20:34:07 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:14.440 20:34:08 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:14.440 20:34:08 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:14.440 20:34:08 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:14.440 20:34:08 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:14.440 20:34:08 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:14.440 20:34:08 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:14.440 20:34:08 ftl -- ftl/ftl.sh@50 -- # break 00:17:14.440 20:34:08 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:14.440 20:34:08 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:14.440 20:34:08 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:14.440 20:34:08 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:14.698 20:34:08 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:14.698 20:34:08 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:14.698 20:34:08 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:14.698 20:34:08 ftl -- ftl/ftl.sh@63 -- # break 00:17:14.698 20:34:08 ftl -- ftl/ftl.sh@66 -- # killprocess 90977 00:17:14.698 20:34:08 ftl -- common/autotest_common.sh@948 -- # '[' -z 90977 ']' 00:17:14.698 20:34:08 ftl -- common/autotest_common.sh@952 -- # kill -0 90977 00:17:14.698 20:34:08 ftl -- common/autotest_common.sh@953 -- # uname 00:17:14.698 20:34:08 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.698 20:34:08 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90977 00:17:14.698 killing process with pid 90977 00:17:14.699 20:34:08 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:14.699 20:34:08 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:14.699 20:34:08 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90977' 00:17:14.699 20:34:08 ftl -- common/autotest_common.sh@967 -- # kill 90977 00:17:14.699 20:34:08 ftl -- common/autotest_common.sh@972 -- # wait 90977 00:17:15.266 20:34:09 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:15.266 20:34:09 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:15.266 20:34:09 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:15.266 20:34:09 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:15.266 20:34:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:15.266 
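The cache and base devices picked above (0000:00:10.0 and 0000:00:11.0) come from filtering bdev_get_bdevs output with jq: the write-buffer cache must expose 64-byte metadata, the base device is any other non-zoned NVMe bdev with at least 1310720 blocks. A condensed sketch of that selection using the same filters shown in the log (the shell variable names here are illustrative):
# cache disk: NVMe bdev with 64-byte metadata; base disk: any other large-enough NVMe bdev
bdevs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs)
nv_cache=$(echo "$bdevs" | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' | head -n1)
base=$(echo "$bdevs" | jq -r --arg c "$nv_cache" '.[] | select(.driver_specific.nvme[0].pci_address != $c and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' | head -n1)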
************************************ 00:17:15.266 START TEST ftl_fio_basic 00:17:15.266 ************************************ 00:17:15.266 20:34:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:15.266 * Looking for test storage... 00:17:15.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.266 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:15.266 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:15.266 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.266 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.266 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:15.266 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:15.266 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.266 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:15.266 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # 
spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=91090 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 91090 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 91090 ']' 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.267 20:34:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:15.526 [2024-07-12 20:34:09.526861] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:15.526 [2024-07-12 20:34:09.527087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91090 ] 00:17:15.784 [2024-07-12 20:34:09.681436] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
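FTL_BDEV_NAME and FTL_JSON_CONF exported above are resolved by the fio job files of the 'basic' suite (randw-verify, randw-verify-j2, randw-verify-depth128) at run time. A minimal sketch of what a randw-verify style job could look like when run through the SPDK bdev fio plugin; the job layout, option values, and /tmp path are illustrative assumptions, not copied from test/ftl/config/fio:
# write an illustrative verify job and run it via the SPDK bdev fio plugin;
# fio expands ${FTL_JSON_CONF} and ${FTL_BDEV_NAME} from the exported environment
cat > /tmp/randw-verify.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=${FTL_JSON_CONF}
thread=1
direct=1
rw=randwrite
verify=crc32c
bs=4k
iodepth=128

[job0]
filename=${FTL_BDEV_NAME}
size=256M
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev fio /tmp/randw-verify.fio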
00:17:15.784 [2024-07-12 20:34:09.699127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:15.784 [2024-07-12 20:34:09.791811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.784 [2024-07-12 20:34:09.791826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.784 [2024-07-12 20:34:09.791870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.351 20:34:10 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.351 20:34:10 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:17:16.351 20:34:10 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:16.351 20:34:10 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:16.351 20:34:10 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:16.351 20:34:10 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:16.351 20:34:10 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:16.351 20:34:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:16.918 20:34:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:16.918 20:34:10 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:16.918 20:34:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:16.919 20:34:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:16.919 20:34:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:16.919 20:34:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:16.919 20:34:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:16.919 20:34:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:16.919 20:34:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:16.919 { 00:17:16.919 "name": "nvme0n1", 00:17:16.919 "aliases": [ 00:17:16.919 "0f66ccc1-63c3-4e25-9154-b90543e96e1f" 00:17:16.919 ], 00:17:16.919 "product_name": "NVMe disk", 00:17:16.919 "block_size": 4096, 00:17:16.919 "num_blocks": 1310720, 00:17:16.919 "uuid": "0f66ccc1-63c3-4e25-9154-b90543e96e1f", 00:17:16.919 "assigned_rate_limits": { 00:17:16.919 "rw_ios_per_sec": 0, 00:17:16.919 "rw_mbytes_per_sec": 0, 00:17:16.919 "r_mbytes_per_sec": 0, 00:17:16.919 "w_mbytes_per_sec": 0 00:17:16.919 }, 00:17:16.919 "claimed": false, 00:17:16.919 "zoned": false, 00:17:16.919 "supported_io_types": { 00:17:16.919 "read": true, 00:17:16.919 "write": true, 00:17:16.919 "unmap": true, 00:17:16.919 "flush": true, 00:17:16.919 "reset": true, 00:17:16.919 "nvme_admin": true, 00:17:16.919 "nvme_io": true, 00:17:16.919 "nvme_io_md": false, 00:17:16.919 "write_zeroes": true, 00:17:16.919 "zcopy": false, 00:17:16.919 "get_zone_info": false, 00:17:16.919 "zone_management": false, 00:17:16.919 "zone_append": false, 00:17:16.919 "compare": true, 00:17:16.919 "compare_and_write": false, 00:17:16.919 "abort": true, 00:17:16.919 "seek_hole": false, 00:17:16.919 "seek_data": false, 00:17:16.919 "copy": true, 00:17:16.919 "nvme_iov_md": false 00:17:16.919 }, 00:17:16.919 "driver_specific": { 00:17:16.919 "nvme": [ 00:17:16.919 { 00:17:16.919 "pci_address": "0000:00:11.0", 00:17:16.919 "trid": { 00:17:16.919 "trtype": "PCIe", 00:17:16.919 "traddr": "0000:00:11.0" 
00:17:16.919 }, 00:17:16.919 "ctrlr_data": { 00:17:16.919 "cntlid": 0, 00:17:16.919 "vendor_id": "0x1b36", 00:17:16.919 "model_number": "QEMU NVMe Ctrl", 00:17:16.919 "serial_number": "12341", 00:17:16.919 "firmware_revision": "8.0.0", 00:17:16.919 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:16.919 "oacs": { 00:17:16.919 "security": 0, 00:17:16.919 "format": 1, 00:17:16.919 "firmware": 0, 00:17:16.919 "ns_manage": 1 00:17:16.919 }, 00:17:16.919 "multi_ctrlr": false, 00:17:16.919 "ana_reporting": false 00:17:16.919 }, 00:17:16.919 "vs": { 00:17:16.919 "nvme_version": "1.4" 00:17:16.919 }, 00:17:16.919 "ns_data": { 00:17:16.919 "id": 1, 00:17:16.919 "can_share": false 00:17:16.919 } 00:17:16.919 } 00:17:16.919 ], 00:17:16.919 "mp_policy": "active_passive" 00:17:16.919 } 00:17:16.919 } 00:17:16.919 ]' 00:17:16.919 20:34:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:17.178 20:34:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:17.178 20:34:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:17.178 20:34:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:17.178 20:34:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:17.178 20:34:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:17:17.178 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:17.178 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:17.178 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:17.178 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:17.178 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:17.436 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:17.436 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:17.695 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=0a0a2572-8e1e-495d-8138-f6334b6eab61 00:17:17.695 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0a0a2572-8e1e-495d-8138-f6334b6eab61 00:17:17.953 20:34:11 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=06323d61-1380-4ee3-8c2f-94b29ba84d20 00:17:17.953 20:34:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 06323d61-1380-4ee3-8c2f-94b29ba84d20 00:17:17.954 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:17.954 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:17.954 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=06323d61-1380-4ee3-8c2f-94b29ba84d20 00:17:17.954 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:17.954 20:34:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 06323d61-1380-4ee3-8c2f-94b29ba84d20 00:17:17.954 20:34:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=06323d61-1380-4ee3-8c2f-94b29ba84d20 00:17:17.954 20:34:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:17.954 20:34:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:17.954 20:34:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:17.954 20:34:11 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06323d61-1380-4ee3-8c2f-94b29ba84d20 00:17:18.213 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:18.213 { 00:17:18.213 "name": "06323d61-1380-4ee3-8c2f-94b29ba84d20", 00:17:18.213 "aliases": [ 00:17:18.213 "lvs/nvme0n1p0" 00:17:18.213 ], 00:17:18.213 "product_name": "Logical Volume", 00:17:18.213 "block_size": 4096, 00:17:18.213 "num_blocks": 26476544, 00:17:18.213 "uuid": "06323d61-1380-4ee3-8c2f-94b29ba84d20", 00:17:18.213 "assigned_rate_limits": { 00:17:18.213 "rw_ios_per_sec": 0, 00:17:18.213 "rw_mbytes_per_sec": 0, 00:17:18.213 "r_mbytes_per_sec": 0, 00:17:18.213 "w_mbytes_per_sec": 0 00:17:18.213 }, 00:17:18.213 "claimed": false, 00:17:18.213 "zoned": false, 00:17:18.213 "supported_io_types": { 00:17:18.213 "read": true, 00:17:18.213 "write": true, 00:17:18.213 "unmap": true, 00:17:18.213 "flush": false, 00:17:18.213 "reset": true, 00:17:18.213 "nvme_admin": false, 00:17:18.213 "nvme_io": false, 00:17:18.213 "nvme_io_md": false, 00:17:18.213 "write_zeroes": true, 00:17:18.213 "zcopy": false, 00:17:18.213 "get_zone_info": false, 00:17:18.213 "zone_management": false, 00:17:18.213 "zone_append": false, 00:17:18.213 "compare": false, 00:17:18.213 "compare_and_write": false, 00:17:18.213 "abort": false, 00:17:18.213 "seek_hole": true, 00:17:18.213 "seek_data": true, 00:17:18.213 "copy": false, 00:17:18.213 "nvme_iov_md": false 00:17:18.213 }, 00:17:18.213 "driver_specific": { 00:17:18.213 "lvol": { 00:17:18.213 "lvol_store_uuid": "0a0a2572-8e1e-495d-8138-f6334b6eab61", 00:17:18.213 "base_bdev": "nvme0n1", 00:17:18.213 "thin_provision": true, 00:17:18.213 "num_allocated_clusters": 0, 00:17:18.213 "snapshot": false, 00:17:18.213 "clone": false, 00:17:18.213 "esnap_clone": false 00:17:18.213 } 00:17:18.213 } 00:17:18.213 } 00:17:18.213 ]' 00:17:18.213 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:18.213 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:18.213 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:18.213 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:18.213 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:18.213 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:18.213 20:34:12 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:18.213 20:34:12 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:18.213 20:34:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:18.778 20:34:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:18.778 20:34:12 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:18.778 20:34:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 06323d61-1380-4ee3-8c2f-94b29ba84d20 00:17:18.778 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=06323d61-1380-4ee3-8c2f-94b29ba84d20 00:17:18.778 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:18.778 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:18.778 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:18.778 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06323d61-1380-4ee3-8c2f-94b29ba84d20 00:17:19.036 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:19.036 { 00:17:19.037 "name": "06323d61-1380-4ee3-8c2f-94b29ba84d20", 00:17:19.037 "aliases": [ 00:17:19.037 "lvs/nvme0n1p0" 00:17:19.037 ], 00:17:19.037 "product_name": "Logical Volume", 00:17:19.037 "block_size": 4096, 00:17:19.037 "num_blocks": 26476544, 00:17:19.037 "uuid": "06323d61-1380-4ee3-8c2f-94b29ba84d20", 00:17:19.037 "assigned_rate_limits": { 00:17:19.037 "rw_ios_per_sec": 0, 00:17:19.037 "rw_mbytes_per_sec": 0, 00:17:19.037 "r_mbytes_per_sec": 0, 00:17:19.037 "w_mbytes_per_sec": 0 00:17:19.037 }, 00:17:19.037 "claimed": false, 00:17:19.037 "zoned": false, 00:17:19.037 "supported_io_types": { 00:17:19.037 "read": true, 00:17:19.037 "write": true, 00:17:19.037 "unmap": true, 00:17:19.037 "flush": false, 00:17:19.037 "reset": true, 00:17:19.037 "nvme_admin": false, 00:17:19.037 "nvme_io": false, 00:17:19.037 "nvme_io_md": false, 00:17:19.037 "write_zeroes": true, 00:17:19.037 "zcopy": false, 00:17:19.037 "get_zone_info": false, 00:17:19.037 "zone_management": false, 00:17:19.037 "zone_append": false, 00:17:19.037 "compare": false, 00:17:19.037 "compare_and_write": false, 00:17:19.037 "abort": false, 00:17:19.037 "seek_hole": true, 00:17:19.037 "seek_data": true, 00:17:19.037 "copy": false, 00:17:19.037 "nvme_iov_md": false 00:17:19.037 }, 00:17:19.037 "driver_specific": { 00:17:19.037 "lvol": { 00:17:19.037 "lvol_store_uuid": "0a0a2572-8e1e-495d-8138-f6334b6eab61", 00:17:19.037 "base_bdev": "nvme0n1", 00:17:19.037 "thin_provision": true, 00:17:19.037 "num_allocated_clusters": 0, 00:17:19.037 "snapshot": false, 00:17:19.037 "clone": false, 00:17:19.037 "esnap_clone": false 00:17:19.037 } 00:17:19.037 } 00:17:19.037 } 00:17:19.037 ]' 00:17:19.037 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:19.037 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:19.037 20:34:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:19.037 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:19.037 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:19.037 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:19.037 20:34:13 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:19.037 20:34:13 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:19.295 20:34:13 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:19.295 20:34:13 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:19.295 20:34:13 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:19.295 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:19.295 20:34:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 06323d61-1380-4ee3-8c2f-94b29ba84d20 00:17:19.295 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=06323d61-1380-4ee3-8c2f-94b29ba84d20 00:17:19.295 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:19.295 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:19.295 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:19.295 20:34:13 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06323d61-1380-4ee3-8c2f-94b29ba84d20 00:17:19.553 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:19.553 { 00:17:19.553 "name": "06323d61-1380-4ee3-8c2f-94b29ba84d20", 00:17:19.553 "aliases": [ 00:17:19.553 "lvs/nvme0n1p0" 00:17:19.553 ], 00:17:19.553 "product_name": "Logical Volume", 00:17:19.553 "block_size": 4096, 00:17:19.553 "num_blocks": 26476544, 00:17:19.553 "uuid": "06323d61-1380-4ee3-8c2f-94b29ba84d20", 00:17:19.553 "assigned_rate_limits": { 00:17:19.553 "rw_ios_per_sec": 0, 00:17:19.553 "rw_mbytes_per_sec": 0, 00:17:19.553 "r_mbytes_per_sec": 0, 00:17:19.553 "w_mbytes_per_sec": 0 00:17:19.553 }, 00:17:19.553 "claimed": false, 00:17:19.553 "zoned": false, 00:17:19.553 "supported_io_types": { 00:17:19.553 "read": true, 00:17:19.553 "write": true, 00:17:19.553 "unmap": true, 00:17:19.553 "flush": false, 00:17:19.553 "reset": true, 00:17:19.553 "nvme_admin": false, 00:17:19.553 "nvme_io": false, 00:17:19.553 "nvme_io_md": false, 00:17:19.553 "write_zeroes": true, 00:17:19.553 "zcopy": false, 00:17:19.553 "get_zone_info": false, 00:17:19.553 "zone_management": false, 00:17:19.553 "zone_append": false, 00:17:19.553 "compare": false, 00:17:19.553 "compare_and_write": false, 00:17:19.553 "abort": false, 00:17:19.553 "seek_hole": true, 00:17:19.553 "seek_data": true, 00:17:19.553 "copy": false, 00:17:19.553 "nvme_iov_md": false 00:17:19.553 }, 00:17:19.553 "driver_specific": { 00:17:19.553 "lvol": { 00:17:19.553 "lvol_store_uuid": "0a0a2572-8e1e-495d-8138-f6334b6eab61", 00:17:19.553 "base_bdev": "nvme0n1", 00:17:19.553 "thin_provision": true, 00:17:19.553 "num_allocated_clusters": 0, 00:17:19.553 "snapshot": false, 00:17:19.553 "clone": false, 00:17:19.553 "esnap_clone": false 00:17:19.553 } 00:17:19.553 } 00:17:19.553 } 00:17:19.553 ]' 00:17:19.553 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:19.553 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:19.553 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:19.553 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:19.553 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:19.553 20:34:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:19.553 20:34:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:19.553 20:34:13 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:19.553 20:34:13 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 06323d61-1380-4ee3-8c2f-94b29ba84d20 -c nvc0n1p0 --l2p_dram_limit 60 00:17:19.832 [2024-07-12 20:34:13.910259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.832 [2024-07-12 20:34:13.910337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:19.832 [2024-07-12 20:34:13.910361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:19.832 [2024-07-12 20:34:13.910393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.832 [2024-07-12 20:34:13.910484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.832 [2024-07-12 20:34:13.910523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.832 [2024-07-12 20:34:13.910548] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:19.832 [2024-07-12 20:34:13.910567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.832 [2024-07-12 20:34:13.910605] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:19.832 [2024-07-12 20:34:13.910993] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:19.832 [2024-07-12 20:34:13.911021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.832 [2024-07-12 20:34:13.911054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.832 [2024-07-12 20:34:13.911068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:17:19.832 [2024-07-12 20:34:13.911087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.832 [2024-07-12 20:34:13.911282] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f7d8daaa-79ce-4cd3-8c15-f5b55099b4e1 00:17:19.832 [2024-07-12 20:34:13.913275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.832 [2024-07-12 20:34:13.913487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:19.832 [2024-07-12 20:34:13.913594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:19.832 [2024-07-12 20:34:13.913700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-07-12 20:34:13.923810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-07-12 20:34:13.924023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.833 [2024-07-12 20:34:13.924132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.920 ms 00:17:19.833 [2024-07-12 20:34:13.924257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-07-12 20:34:13.924527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-07-12 20:34:13.924632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.833 [2024-07-12 20:34:13.924734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:19.833 [2024-07-12 20:34:13.924819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.833 [2024-07-12 20:34:13.925005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.833 [2024-07-12 20:34:13.925101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:19.833 [2024-07-12 20:34:13.925207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:19.834 [2024-07-12 20:34:13.925337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.834 [2024-07-12 20:34:13.925477] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.834 [2024-07-12 20:34:13.927901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.834 [2024-07-12 20:34:13.928030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.834 [2024-07-12 20:34:13.928112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:17:19.834 [2024-07-12 20:34:13.928196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.834 [2024-07-12 20:34:13.928356] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.834 [2024-07-12 20:34:13.928448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:19.834 [2024-07-12 20:34:13.928554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:19.834 [2024-07-12 20:34:13.928659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.834 [2024-07-12 20:34:13.928785] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:19.834 [2024-07-12 20:34:13.929083] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:19.834 [2024-07-12 20:34:13.929213] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:19.834 [2024-07-12 20:34:13.929320] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:19.834 [2024-07-12 20:34:13.929409] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:19.834 [2024-07-12 20:34:13.929491] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:19.834 [2024-07-12 20:34:13.929588] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:19.834 [2024-07-12 20:34:13.929679] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:19.834 [2024-07-12 20:34:13.929775] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:19.835 [2024-07-12 20:34:13.929868] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:19.835 [2024-07-12 20:34:13.929946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.835 [2024-07-12 20:34:13.930027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:19.835 [2024-07-12 20:34:13.930132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:17:19.835 [2024-07-12 20:34:13.930212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.835 [2024-07-12 20:34:13.930455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.835 [2024-07-12 20:34:13.930580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:19.835 [2024-07-12 20:34:13.930672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:19.835 [2024-07-12 20:34:13.930752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.835 [2024-07-12 20:34:13.930967] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:19.835 [2024-07-12 20:34:13.931080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:19.835 [2024-07-12 20:34:13.931159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.835 [2024-07-12 20:34:13.931304] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.835 [2024-07-12 20:34:13.931403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:19.835 [2024-07-12 20:34:13.931495] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:19.835 [2024-07-12 20:34:13.931586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:19.835 [2024-07-12 20:34:13.931676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:19.835 
[2024-07-12 20:34:13.931763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:19.835 [2024-07-12 20:34:13.931855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.835 [2024-07-12 20:34:13.931962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:19.835 [2024-07-12 20:34:13.932052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:19.835 [2024-07-12 20:34:13.932139] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.835 [2024-07-12 20:34:13.932227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:19.835 [2024-07-12 20:34:13.932330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:19.835 [2024-07-12 20:34:13.932424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.835 [2024-07-12 20:34:13.932504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:19.840 [2024-07-12 20:34:13.932580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:19.841 [2024-07-12 20:34:13.932653] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.841 [2024-07-12 20:34:13.932723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:19.841 [2024-07-12 20:34:13.932801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:19.841 [2024-07-12 20:34:13.932879] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.841 [2024-07-12 20:34:13.932972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:19.841 [2024-07-12 20:34:13.933061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:19.841 [2024-07-12 20:34:13.933138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.841 [2024-07-12 20:34:13.933221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:19.841 [2024-07-12 20:34:13.933320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:19.841 [2024-07-12 20:34:13.933412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.841 [2024-07-12 20:34:13.933492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:19.841 [2024-07-12 20:34:13.933591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:19.841 [2024-07-12 20:34:13.933682] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.841 [2024-07-12 20:34:13.933790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:19.841 [2024-07-12 20:34:13.933887] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:19.841 [2024-07-12 20:34:13.933980] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.841 [2024-07-12 20:34:13.934069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:19.841 [2024-07-12 20:34:13.934146] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:19.841 [2024-07-12 20:34:13.934233] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.841 [2024-07-12 20:34:13.934334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:19.841 [2024-07-12 20:34:13.934411] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:19.841 [2024-07-12 20:34:13.934488] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:19.841 [2024-07-12 20:34:13.934569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:19.841 [2024-07-12 20:34:13.934651] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:19.841 [2024-07-12 20:34:13.934731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.841 [2024-07-12 20:34:13.934803] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:19.841 [2024-07-12 20:34:13.934916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:19.841 [2024-07-12 20:34:13.935001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.841 [2024-07-12 20:34:13.935088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.841 [2024-07-12 20:34:13.935176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:19.841 [2024-07-12 20:34:13.935293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:19.841 [2024-07-12 20:34:13.935406] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:19.841 [2024-07-12 20:34:13.935485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:19.841 [2024-07-12 20:34:13.935584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:19.841 [2024-07-12 20:34:13.935671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:19.841 [2024-07-12 20:34:13.935771] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:19.841 [2024-07-12 20:34:13.935863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.841 [2024-07-12 20:34:13.935960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:19.841 [2024-07-12 20:34:13.936041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:19.841 [2024-07-12 20:34:13.936133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:19.841 [2024-07-12 20:34:13.936223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:19.841 [2024-07-12 20:34:13.936324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:19.841 [2024-07-12 20:34:13.936412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:19.841 [2024-07-12 20:34:13.936503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:19.841 [2024-07-12 20:34:13.936580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:19.841 [2024-07-12 20:34:13.936667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:19.841 [2024-07-12 20:34:13.936758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:17:19.841 [2024-07-12 20:34:13.936841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:19.841 [2024-07-12 20:34:13.936928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:19.841 [2024-07-12 20:34:13.937017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:19.841 [2024-07-12 20:34:13.937089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:19.841 [2024-07-12 20:34:13.937182] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:19.841 [2024-07-12 20:34:13.937292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.841 [2024-07-12 20:34:13.937389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:19.841 [2024-07-12 20:34:13.937464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:19.841 [2024-07-12 20:34:13.937545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:19.841 [2024-07-12 20:34:13.937635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:19.841 [2024-07-12 20:34:13.937730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.841 [2024-07-12 20:34:13.937812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:19.841 [2024-07-12 20:34:13.937910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.821 ms 00:17:19.841 [2024-07-12 20:34:13.937997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.841 [2024-07-12 20:34:13.938256] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:17:19.841 [2024-07-12 20:34:13.938365] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:23.120 [2024-07-12 20:34:16.622265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.622730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:23.121 [2024-07-12 20:34:16.622869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2683.989 ms 00:17:23.121 [2024-07-12 20:34:16.622970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.637731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.637912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.121 [2024-07-12 20:34:16.638028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.469 ms 00:17:23.121 [2024-07-12 20:34:16.638121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.638376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.638474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:23.121 [2024-07-12 20:34:16.638583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:23.121 [2024-07-12 20:34:16.638688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.660922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.661120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.121 [2024-07-12 20:34:16.661242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.055 ms 00:17:23.121 [2024-07-12 20:34:16.661364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.661499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.661596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.121 [2024-07-12 20:34:16.661700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:23.121 [2024-07-12 20:34:16.661798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.662532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.662623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.121 [2024-07-12 20:34:16.662743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:17:23.121 [2024-07-12 20:34:16.662853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.663132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.663226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.121 [2024-07-12 20:34:16.663336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:17:23.121 [2024-07-12 20:34:16.663443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.673985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.674156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.121 [2024-07-12 
20:34:16.674333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.373 ms 00:17:23.121 [2024-07-12 20:34:16.674473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.684847] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:23.121 [2024-07-12 20:34:16.707589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.707785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:23.121 [2024-07-12 20:34:16.707892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.851 ms 00:17:23.121 [2024-07-12 20:34:16.707986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.757048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.757355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:23.121 [2024-07-12 20:34:16.757472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.917 ms 00:17:23.121 [2024-07-12 20:34:16.757570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.757954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.758083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:23.121 [2024-07-12 20:34:16.758174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:17:23.121 [2024-07-12 20:34:16.758272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.762256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.762366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:23.121 [2024-07-12 20:34:16.762474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.817 ms 00:17:23.121 [2024-07-12 20:34:16.762583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.765814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.765942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:23.121 [2024-07-12 20:34:16.766121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms 00:17:23.121 [2024-07-12 20:34:16.766221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.766769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.766891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:23.121 [2024-07-12 20:34:16.766984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:17:23.121 [2024-07-12 20:34:16.767069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.798764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.798983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:23.121 [2024-07-12 20:34:16.799126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.581 ms 00:17:23.121 [2024-07-12 20:34:16.799216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 
20:34:16.804334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.804472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:23.121 [2024-07-12 20:34:16.804578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.975 ms 00:17:23.121 [2024-07-12 20:34:16.804657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.808394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.808521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:23.121 [2024-07-12 20:34:16.808612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.566 ms 00:17:23.121 [2024-07-12 20:34:16.808699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.812744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.812878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:23.121 [2024-07-12 20:34:16.812971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.881 ms 00:17:23.121 [2024-07-12 20:34:16.813060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.813268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.813362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:23.121 [2024-07-12 20:34:16.813444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:23.121 [2024-07-12 20:34:16.813559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.813757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.121 [2024-07-12 20:34:16.813866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:23.121 [2024-07-12 20:34:16.813955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:23.121 [2024-07-12 20:34:16.814034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.121 [2024-07-12 20:34:16.815625] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2904.770 ms, result 0 00:17:23.121 { 00:17:23.121 "name": "ftl0", 00:17:23.121 "uuid": "f7d8daaa-79ce-4cd3-8c15-f5b55099b4e1" 00:17:23.121 } 00:17:23.121 20:34:16 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:23.121 20:34:16 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:17:23.121 20:34:16 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:23.121 20:34:16 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:17:23.121 20:34:16 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:23.121 20:34:16 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:23.121 20:34:16 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:23.121 20:34:17 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:23.380 [ 00:17:23.380 { 00:17:23.380 "name": "ftl0", 00:17:23.380 "aliases": [ 00:17:23.380 "f7d8daaa-79ce-4cd3-8c15-f5b55099b4e1" 00:17:23.380 ], 00:17:23.380 "product_name": "FTL disk", 
00:17:23.380 "block_size": 4096, 00:17:23.380 "num_blocks": 20971520, 00:17:23.380 "uuid": "f7d8daaa-79ce-4cd3-8c15-f5b55099b4e1", 00:17:23.380 "assigned_rate_limits": { 00:17:23.380 "rw_ios_per_sec": 0, 00:17:23.380 "rw_mbytes_per_sec": 0, 00:17:23.380 "r_mbytes_per_sec": 0, 00:17:23.380 "w_mbytes_per_sec": 0 00:17:23.380 }, 00:17:23.380 "claimed": false, 00:17:23.380 "zoned": false, 00:17:23.380 "supported_io_types": { 00:17:23.380 "read": true, 00:17:23.380 "write": true, 00:17:23.380 "unmap": true, 00:17:23.380 "flush": true, 00:17:23.380 "reset": false, 00:17:23.380 "nvme_admin": false, 00:17:23.380 "nvme_io": false, 00:17:23.380 "nvme_io_md": false, 00:17:23.380 "write_zeroes": true, 00:17:23.380 "zcopy": false, 00:17:23.380 "get_zone_info": false, 00:17:23.380 "zone_management": false, 00:17:23.380 "zone_append": false, 00:17:23.380 "compare": false, 00:17:23.380 "compare_and_write": false, 00:17:23.380 "abort": false, 00:17:23.380 "seek_hole": false, 00:17:23.380 "seek_data": false, 00:17:23.380 "copy": false, 00:17:23.380 "nvme_iov_md": false 00:17:23.380 }, 00:17:23.380 "driver_specific": { 00:17:23.380 "ftl": { 00:17:23.380 "base_bdev": "06323d61-1380-4ee3-8c2f-94b29ba84d20", 00:17:23.380 "cache": "nvc0n1p0" 00:17:23.380 } 00:17:23.380 } 00:17:23.380 } 00:17:23.380 ] 00:17:23.380 20:34:17 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:17:23.380 20:34:17 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:23.380 20:34:17 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:23.639 20:34:17 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:23.639 20:34:17 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:23.899 [2024-07-12 20:34:17.802089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.899 [2024-07-12 20:34:17.802589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:23.899 [2024-07-12 20:34:17.802728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:23.899 [2024-07-12 20:34:17.802812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.899 [2024-07-12 20:34:17.802964] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.899 [2024-07-12 20:34:17.803976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.899 [2024-07-12 20:34:17.804080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:23.899 [2024-07-12 20:34:17.804174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:17:23.899 [2024-07-12 20:34:17.804272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.899 [2024-07-12 20:34:17.805019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.899 [2024-07-12 20:34:17.805124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.899 [2024-07-12 20:34:17.805204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:17:23.899 [2024-07-12 20:34:17.805335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.899 [2024-07-12 20:34:17.808614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.899 [2024-07-12 20:34:17.808724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.899 [2024-07-12 
20:34:17.808807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.181 ms 00:17:23.899 [2024-07-12 20:34:17.808894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.899 [2024-07-12 20:34:17.815554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.899 [2024-07-12 20:34:17.815666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:23.899 [2024-07-12 20:34:17.815733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.565 ms 00:17:23.899 [2024-07-12 20:34:17.815811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.899 [2024-07-12 20:34:17.817426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.899 [2024-07-12 20:34:17.817550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.899 [2024-07-12 20:34:17.817664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.414 ms 00:17:23.899 [2024-07-12 20:34:17.817739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.899 [2024-07-12 20:34:17.822172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.899 [2024-07-12 20:34:17.822326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.899 [2024-07-12 20:34:17.822414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.282 ms 00:17:23.899 [2024-07-12 20:34:17.822518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.899 [2024-07-12 20:34:17.822811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.899 [2024-07-12 20:34:17.822918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.899 [2024-07-12 20:34:17.822997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:17:23.899 [2024-07-12 20:34:17.823081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.899 [2024-07-12 20:34:17.824793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.899 [2024-07-12 20:34:17.824895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:23.899 [2024-07-12 20:34:17.824985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:17:23.899 [2024-07-12 20:34:17.825051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.899 [2024-07-12 20:34:17.826656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.899 [2024-07-12 20:34:17.826765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:23.899 [2024-07-12 20:34:17.826864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.456 ms 00:17:23.899 [2024-07-12 20:34:17.826947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.899 [2024-07-12 20:34:17.828065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.899 [2024-07-12 20:34:17.828180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.899 [2024-07-12 20:34:17.828288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:17:23.899 [2024-07-12 20:34:17.828368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.899 [2024-07-12 20:34:17.829603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.899 [2024-07-12 20:34:17.829716] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.899 [2024-07-12 20:34:17.829804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:17:23.899 [2024-07-12 20:34:17.829887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.899 [2024-07-12 20:34:17.830050] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.899 [2024-07-12 20:34:17.830152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.899 [2024-07-12 20:34:17.830271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.899 [2024-07-12 20:34:17.830363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.899 [2024-07-12 20:34:17.830450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:23.899 [2024-07-12 20:34:17.830529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.830600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.830678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.830755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.830847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.830943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.831036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.831125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.831218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.831327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.831412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.831495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.831577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.831649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.831736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.831811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.831903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.831975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 
20:34:17.832052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.832139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.832205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.832304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.832387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.832449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.832511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.832580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.832665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.832738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.832826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.832917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.833007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.833091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.833172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.833271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.833365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.833440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.833531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.833615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.833696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.833772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.833856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.833931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.834009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:23.900 [2024-07-12 20:34:17.834071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.834162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.834259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.834356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.834454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.834544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.834630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.834724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.834798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.834894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.834999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.835091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.835184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.835279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.835371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.835452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.835531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.835620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.835697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.835774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.835844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.835926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.836983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.837069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.837157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.837268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.837357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.837449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.837536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.837623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.837707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.837785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.837867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.837952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.838034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.838121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.838206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.838332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.838425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.838512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.838615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:23.900 [2024-07-12 20:34:17.838718] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:23.900 [2024-07-12 20:34:17.838795] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f7d8daaa-79ce-4cd3-8c15-f5b55099b4e1 00:17:23.900 [2024-07-12 20:34:17.838922] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.900 [2024-07-12 20:34:17.839013] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:23.900 [2024-07-12 20:34:17.839090] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.900 [2024-07-12 20:34:17.839170] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.900 [2024-07-12 20:34:17.839273] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.900 [2024-07-12 20:34:17.839365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.900 [2024-07-12 20:34:17.839455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.900 [2024-07-12 20:34:17.839539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.900 [2024-07-12 20:34:17.839624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:23.900 [2024-07-12 20:34:17.839715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.901 [2024-07-12 20:34:17.839804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.901 [2024-07-12 20:34:17.839893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.667 ms 00:17:23.901 [2024-07-12 20:34:17.839982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.842327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.901 [2024-07-12 20:34:17.842427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.901 [2024-07-12 20:34:17.842512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.239 ms 00:17:23.901 [2024-07-12 20:34:17.842591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.842870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.901 [2024-07-12 20:34:17.842975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:23.901 [2024-07-12 20:34:17.843080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:17:23.901 [2024-07-12 20:34:17.843159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.851640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.901 [2024-07-12 20:34:17.851790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.901 [2024-07-12 20:34:17.851860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.901 [2024-07-12 20:34:17.851934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 
[2024-07-12 20:34:17.852094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.901 [2024-07-12 20:34:17.852195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.901 [2024-07-12 20:34:17.852333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.901 [2024-07-12 20:34:17.852415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.852608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.901 [2024-07-12 20:34:17.852704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.901 [2024-07-12 20:34:17.852769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.901 [2024-07-12 20:34:17.852836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.852946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.901 [2024-07-12 20:34:17.853036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.901 [2024-07-12 20:34:17.853122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.901 [2024-07-12 20:34:17.853202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.867046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.901 [2024-07-12 20:34:17.867267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.901 [2024-07-12 20:34:17.867373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.901 [2024-07-12 20:34:17.867441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.878184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.901 [2024-07-12 20:34:17.878369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.901 [2024-07-12 20:34:17.878466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.901 [2024-07-12 20:34:17.878547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.878717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.901 [2024-07-12 20:34:17.878813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.901 [2024-07-12 20:34:17.878916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.901 [2024-07-12 20:34:17.878997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.879186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.901 [2024-07-12 20:34:17.879297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.901 [2024-07-12 20:34:17.879389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.901 [2024-07-12 20:34:17.879468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.879686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.901 [2024-07-12 20:34:17.879780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.901 [2024-07-12 20:34:17.879856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.901 [2024-07-12 20:34:17.879918] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.880069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.901 [2024-07-12 20:34:17.880172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:23.901 [2024-07-12 20:34:17.880276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.901 [2024-07-12 20:34:17.880305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.880389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.901 [2024-07-12 20:34:17.880419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.901 [2024-07-12 20:34:17.880434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.901 [2024-07-12 20:34:17.880449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.880527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.901 [2024-07-12 20:34:17.880549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.901 [2024-07-12 20:34:17.880563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.901 [2024-07-12 20:34:17.880577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.901 [2024-07-12 20:34:17.880806] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 78.684 ms, result 0 00:17:23.901 true 00:17:23.901 20:34:17 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 91090 00:17:23.901 20:34:17 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 91090 ']' 00:17:23.901 20:34:17 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 91090 00:17:23.901 20:34:17 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:17:23.901 20:34:17 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.901 20:34:17 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91090 00:17:23.901 20:34:17 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:23.901 killing process with pid 91090 00:17:23.901 20:34:17 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:23.901 20:34:17 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91090' 00:17:23.901 20:34:17 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 91090 00:17:23.901 20:34:17 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 91090 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:27.208 20:34:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:27.208 20:34:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:27.208 20:34:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:27.208 20:34:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:27.208 20:34:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:27.208 20:34:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:27.208 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:27.208 fio-3.35 00:17:27.208 Starting 1 thread 00:17:32.490 00:17:32.490 test: (groupid=0, jobs=1): err= 0: pid=91280: Fri Jul 12 20:34:26 2024 00:17:32.490 read: IOPS=948, BW=63.0MiB/s (66.0MB/s)(255MiB/4043msec) 00:17:32.490 slat (nsec): min=5673, max=51416, avg=10178.25, stdev=4976.88 00:17:32.490 clat (usec): min=304, max=770, avg=464.64, stdev=54.24 00:17:32.490 lat (usec): min=311, max=777, avg=474.82, stdev=55.43 00:17:32.490 clat percentiles (usec): 00:17:32.490 | 1.00th=[ 363], 5.00th=[ 375], 10.00th=[ 392], 20.00th=[ 429], 00:17:32.490 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 461], 60.00th=[ 469], 00:17:32.490 | 70.00th=[ 482], 80.00th=[ 506], 90.00th=[ 545], 95.00th=[ 562], 00:17:32.490 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 701], 99.95th=[ 734], 00:17:32.490 | 99.99th=[ 775] 00:17:32.490 write: IOPS=954, BW=63.4MiB/s (66.5MB/s)(256MiB/4038msec); 0 zone resets 00:17:32.490 slat (usec): min=19, max=129, avg=29.17, stdev= 9.05 00:17:32.490 clat (usec): min=376, max=904, avg=530.29, stdev=63.73 00:17:32.490 lat (usec): min=398, max=939, avg=559.46, stdev=65.37 00:17:32.490 clat percentiles (usec): 00:17:32.490 | 1.00th=[ 400], 5.00th=[ 433], 10.00th=[ 465], 20.00th=[ 478], 00:17:32.490 | 30.00th=[ 494], 40.00th=[ 506], 50.00th=[ 529], 60.00th=[ 545], 00:17:32.490 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 603], 95.00th=[ 644], 00:17:32.490 | 99.00th=[ 734], 99.50th=[ 783], 99.90th=[ 873], 99.95th=[ 898], 00:17:32.490 | 99.99th=[ 906] 00:17:32.490 bw ( KiB/s): min=61200, max=69904, per=100.00%, avg=64991.00, stdev=2802.71, samples=8 00:17:32.490 iops : min= 900, max= 1028, avg=955.75, stdev=41.22, samples=8 00:17:32.490 lat (usec) : 500=57.12%, 750=42.55%, 1000=0.33% 00:17:32.490 cpu : 
usr=99.04%, sys=0.10%, ctx=8, majf=0, minf=1181 00:17:32.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:32.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.490 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:32.490 00:17:32.490 Run status group 0 (all jobs): 00:17:32.490 READ: bw=63.0MiB/s (66.0MB/s), 63.0MiB/s-63.0MiB/s (66.0MB/s-66.0MB/s), io=255MiB (267MB), run=4043-4043msec 00:17:32.490 WRITE: bw=63.4MiB/s (66.5MB/s), 63.4MiB/s-63.4MiB/s (66.5MB/s-66.5MB/s), io=256MiB (269MB), run=4038-4038msec 00:17:32.765 ----------------------------------------------------- 00:17:32.765 Suppressions used: 00:17:32.765 count bytes template 00:17:32.765 1 5 /usr/src/fio/parse.c 00:17:32.765 1 8 libtcmalloc_minimal.so 00:17:32.765 1 904 libcrypto.so 00:17:32.765 ----------------------------------------------------- 00:17:32.765 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:32.765 20:34:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:33.023 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:33.023 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:33.023 fio-3.35 00:17:33.023 Starting 2 threads 00:18:05.126 00:18:05.126 first_half: (groupid=0, jobs=1): err= 0: pid=91362: Fri Jul 12 20:34:56 2024 00:18:05.126 read: IOPS=2263, BW=9055KiB/s (9272kB/s)(255MiB/28820msec) 00:18:05.126 slat (usec): min=4, max=129, avg= 7.77, stdev= 2.38 00:18:05.126 clat (usec): min=841, max=385888, avg=43794.96, stdev=20568.10 00:18:05.126 lat (usec): min=850, max=385895, avg=43802.74, stdev=20568.26 00:18:05.126 clat percentiles (msec): 00:18:05.126 | 1.00th=[ 11], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 39], 00:18:05.126 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:18:05.126 | 70.00th=[ 42], 80.00th=[ 44], 90.00th=[ 47], 95.00th=[ 62], 00:18:05.126 | 99.00th=[ 165], 99.50th=[ 182], 99.90th=[ 197], 99.95th=[ 288], 00:18:05.126 | 99.99th=[ 376] 00:18:05.126 write: IOPS=2921, BW=11.4MiB/s (12.0MB/s)(256MiB/22431msec); 0 zone resets 00:18:05.126 slat (usec): min=5, max=1031, avg= 9.75, stdev= 6.37 00:18:05.126 clat (usec): min=465, max=114350, avg=12632.33, stdev=21658.75 00:18:05.126 lat (usec): min=475, max=114360, avg=12642.08, stdev=21658.93 00:18:05.126 clat percentiles (usec): 00:18:05.126 | 1.00th=[ 979], 5.00th=[ 1287], 10.00th=[ 1500], 20.00th=[ 1876], 00:18:05.126 | 30.00th=[ 2737], 40.00th=[ 4686], 50.00th=[ 6259], 60.00th=[ 7308], 00:18:05.126 | 70.00th=[ 8586], 80.00th=[ 14484], 90.00th=[ 21365], 95.00th=[ 82314], 00:18:05.126 | 99.00th=[102237], 99.50th=[106431], 99.90th=[110625], 99.95th=[111674], 00:18:05.126 | 99.99th=[113771] 00:18:05.126 bw ( KiB/s): min= 72, max=41272, per=100.00%, avg=21843.25, stdev=10611.79, samples=24 00:18:05.126 iops : min= 18, max=10318, avg=5460.79, stdev=2652.94, samples=24 00:18:05.126 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.53% 00:18:05.126 lat (msec) : 2=10.80%, 4=7.25%, 10=18.93%, 20=8.00%, 50=47.30% 00:18:05.126 lat (msec) : 100=5.21%, 250=1.89%, 500=0.03% 00:18:05.126 cpu : usr=99.15%, sys=0.18%, ctx=137, majf=0, minf=5579 00:18:05.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:05.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.126 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:05.126 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:05.126 second_half: (groupid=0, jobs=1): err= 0: pid=91363: Fri Jul 12 20:34:56 2024 00:18:05.126 read: IOPS=2247, BW=8988KiB/s (9204kB/s)(255MiB/29039msec) 00:18:05.126 slat (nsec): min=4647, max=37419, avg=7890.53, stdev=2286.27 00:18:05.126 clat (usec): min=875, max=394560, avg=43148.39, stdev=23217.47 00:18:05.126 lat (usec): min=883, max=394569, avg=43156.28, stdev=23217.71 00:18:05.126 clat percentiles (msec): 00:18:05.126 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 38], 20.00th=[ 39], 00:18:05.126 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 40], 00:18:05.126 | 70.00th=[ 42], 80.00th=[ 44], 90.00th=[ 46], 95.00th=[ 55], 00:18:05.126 | 99.00th=[ 174], 99.50th=[ 194], 
99.90th=[ 264], 99.95th=[ 300], 00:18:05.126 | 99.99th=[ 388] 00:18:05.126 write: IOPS=2633, BW=10.3MiB/s (10.8MB/s)(256MiB/24885msec); 0 zone resets 00:18:05.126 slat (usec): min=5, max=867, avg=10.00, stdev= 6.00 00:18:05.126 clat (usec): min=483, max=114716, avg=13728.27, stdev=22958.71 00:18:05.127 lat (usec): min=493, max=114726, avg=13738.27, stdev=22959.02 00:18:05.127 clat percentiles (usec): 00:18:05.127 | 1.00th=[ 947], 5.00th=[ 1221], 10.00th=[ 1401], 20.00th=[ 1729], 00:18:05.127 | 30.00th=[ 2507], 40.00th=[ 3916], 50.00th=[ 5604], 60.00th=[ 6980], 00:18:05.127 | 70.00th=[ 8717], 80.00th=[ 16188], 90.00th=[ 37487], 95.00th=[ 84411], 00:18:05.127 | 99.00th=[103285], 99.50th=[107480], 99.90th=[111674], 99.95th=[112722], 00:18:05.127 | 99.99th=[113771] 00:18:05.127 bw ( KiB/s): min= 952, max=50720, per=92.17%, avg=19418.07, stdev=12595.46, samples=27 00:18:05.127 iops : min= 238, max=12680, avg=4854.52, stdev=3148.86, samples=27 00:18:05.127 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.73% 00:18:05.127 lat (msec) : 2=12.00%, 4=7.75%, 10=16.19%, 20=8.28%, 50=48.49% 00:18:05.127 lat (msec) : 100=4.29%, 250=2.16%, 500=0.06% 00:18:05.127 cpu : usr=99.18%, sys=0.17%, ctx=52, majf=0, minf=5561 00:18:05.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:05.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.127 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:05.127 issued rwts: total=65251,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.127 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:05.127 00:18:05.127 Run status group 0 (all jobs): 00:18:05.127 READ: bw=17.6MiB/s (18.4MB/s), 8988KiB/s-9055KiB/s (9204kB/s-9272kB/s), io=510MiB (534MB), run=28820-29039msec 00:18:05.127 WRITE: bw=20.6MiB/s (21.6MB/s), 10.3MiB/s-11.4MiB/s (10.8MB/s-12.0MB/s), io=512MiB (537MB), run=22431-24885msec 00:18:05.127 ----------------------------------------------------- 00:18:05.127 Suppressions used: 00:18:05.127 count bytes template 00:18:05.127 2 10 /usr/src/fio/parse.c 00:18:05.127 2 192 /usr/src/fio/iolog.c 00:18:05.127 1 8 libtcmalloc_minimal.so 00:18:05.127 1 904 libcrypto.so 00:18:05.127 ----------------------------------------------------- 00:18:05.127 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:05.127 20:34:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:05.127 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:05.127 fio-3.35 00:18:05.127 Starting 1 thread 00:18:23.352 00:18:23.352 test: (groupid=0, jobs=1): err= 0: pid=91716: Fri Jul 12 20:35:14 2024 00:18:23.352 read: IOPS=6411, BW=25.0MiB/s (26.3MB/s)(255MiB/10169msec) 00:18:23.352 slat (nsec): min=4719, max=40481, avg=6603.36, stdev=1720.12 00:18:23.352 clat (usec): min=791, max=38036, avg=19951.46, stdev=1355.23 00:18:23.352 lat (usec): min=797, max=38042, avg=19958.07, stdev=1355.27 00:18:23.352 clat percentiles (usec): 00:18:23.352 | 1.00th=[18744], 5.00th=[19006], 10.00th=[19268], 20.00th=[19268], 00:18:23.352 | 30.00th=[19530], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:18:23.352 | 70.00th=[19792], 80.00th=[20055], 90.00th=[21103], 95.00th=[22414], 00:18:23.352 | 99.00th=[26346], 99.50th=[27132], 99.90th=[28443], 99.95th=[33424], 00:18:23.352 | 99.99th=[37487] 00:18:23.352 write: IOPS=11.7k, BW=45.8MiB/s (48.1MB/s)(256MiB/5585msec); 0 zone resets 00:18:23.352 slat (usec): min=5, max=764, avg= 9.24, stdev= 5.88 00:18:23.352 clat (usec): min=647, max=61447, avg=10848.36, stdev=13420.46 00:18:23.352 lat (usec): min=660, max=61456, avg=10857.60, stdev=13420.47 00:18:23.352 clat percentiles (usec): 00:18:23.352 | 1.00th=[ 930], 5.00th=[ 1106], 10.00th=[ 1237], 20.00th=[ 1418], 00:18:23.352 | 30.00th=[ 1598], 40.00th=[ 2040], 50.00th=[ 7308], 60.00th=[ 8455], 00:18:23.352 | 70.00th=[ 9765], 80.00th=[11731], 90.00th=[39060], 95.00th=[41681], 00:18:23.352 | 99.00th=[46400], 99.50th=[47449], 99.90th=[50594], 99.95th=[51643], 00:18:23.352 | 99.99th=[58983] 00:18:23.352 bw ( KiB/s): min= 5952, max=66912, per=93.06%, avg=43682.67, stdev=14662.11, samples=12 00:18:23.352 iops : min= 1488, max=16728, avg=10920.67, stdev=3665.53, samples=12 00:18:23.352 lat (usec) : 750=0.02%, 1000=1.06% 00:18:23.352 lat (msec) : 2=18.84%, 4=1.06%, 10=14.65%, 20=43.87%, 50=20.42% 00:18:23.352 lat (msec) : 100=0.08% 00:18:23.352 cpu : usr=98.88%, sys=0.35%, ctx=41, majf=0, minf=5577 00:18:23.352 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:23.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.352 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:23.352 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:23.352 00:18:23.352 Run status group 0 (all jobs): 00:18:23.352 READ: bw=25.0MiB/s (26.3MB/s), 25.0MiB/s-25.0MiB/s (26.3MB/s-26.3MB/s), io=255MiB (267MB), run=10169-10169msec 00:18:23.352 WRITE: bw=45.8MiB/s (48.1MB/s), 45.8MiB/s-45.8MiB/s (48.1MB/s-48.1MB/s), io=256MiB (268MB), run=5585-5585msec 00:18:23.352 ----------------------------------------------------- 00:18:23.352 Suppressions used: 00:18:23.352 count bytes template 00:18:23.352 1 5 /usr/src/fio/parse.c 00:18:23.352 2 192 /usr/src/fio/iolog.c 00:18:23.352 1 8 libtcmalloc_minimal.so 00:18:23.352 1 904 libcrypto.so 00:18:23.352 ----------------------------------------------------- 00:18:23.352 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:23.352 Remove shared memory files 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid75464 /dev/shm/spdk_tgt_trace.pid90060 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:23.352 00:18:23.352 real 1m6.589s 00:18:23.352 user 2m30.050s 00:18:23.352 sys 0m3.916s 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:23.352 ************************************ 00:18:23.352 END TEST ftl_fio_basic 00:18:23.352 20:35:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:23.353 ************************************ 00:18:23.353 20:35:15 ftl -- common/autotest_common.sh@1142 -- # return 0 00:18:23.353 20:35:15 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:23.353 20:35:15 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:23.353 20:35:15 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:23.353 20:35:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:23.353 ************************************ 00:18:23.353 START TEST ftl_bdevperf 00:18:23.353 ************************************ 00:18:23.353 20:35:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:23.353 * Looking for test storage... 
00:18:23.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:23.353 20:35:16 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=91960 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 91960 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 91960 ']' 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:23.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:23.353 20:35:16 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:23.353 [2024-07-12 20:35:16.148414] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:18:23.353 [2024-07-12 20:35:16.148628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91960 ] 00:18:23.353 [2024-07-12 20:35:16.302474] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
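For reference, the launch pattern being traced here (bdevperf.sh lines 18-22) is: start bdevperf in standby mode with -z, remember its pid, install a cleanup trap, and block until the application answers on its RPC socket. A minimal stand-alone sketch of that pattern, with paths taken from the trace; the rpc_get_methods polling loop is an illustrative assumption standing in for the repo's waitforlisten helper:

   SPDK=/home/vagrant/spdk_repo/spdk
   "$SPDK/build/examples/bdevperf" -z -T ftl0 &
   bdevperf_pid=$!
   # the real script uses the killprocess helper from autotest_common.sh here
   trap 'kill "$bdevperf_pid"; exit 1' SIGINT SIGTERM EXIT
   # waitforlisten equivalent: block until the app is up on /var/tmp/spdk.sock
   until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
       sleep 0.1
   done

Once the socket is live, all further configuration and the workloads themselves are driven through rpc.py and bdevperf.py against this resident process, as the following trace shows.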
00:18:23.353 [2024-07-12 20:35:16.323070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.353 [2024-07-12 20:35:16.396677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:23.353 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:23.611 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:23.611 { 00:18:23.611 "name": "nvme0n1", 00:18:23.611 "aliases": [ 00:18:23.611 "706b8cc7-bcfa-47b3-bd25-d8aaef95dbd6" 00:18:23.611 ], 00:18:23.611 "product_name": "NVMe disk", 00:18:23.611 "block_size": 4096, 00:18:23.611 "num_blocks": 1310720, 00:18:23.611 "uuid": "706b8cc7-bcfa-47b3-bd25-d8aaef95dbd6", 00:18:23.611 "assigned_rate_limits": { 00:18:23.611 "rw_ios_per_sec": 0, 00:18:23.611 "rw_mbytes_per_sec": 0, 00:18:23.611 "r_mbytes_per_sec": 0, 00:18:23.611 "w_mbytes_per_sec": 0 00:18:23.611 }, 00:18:23.611 "claimed": true, 00:18:23.611 "claim_type": "read_many_write_one", 00:18:23.611 "zoned": false, 00:18:23.611 "supported_io_types": { 00:18:23.611 "read": true, 00:18:23.611 "write": true, 00:18:23.611 "unmap": true, 00:18:23.611 "flush": true, 00:18:23.611 "reset": true, 00:18:23.611 "nvme_admin": true, 00:18:23.611 "nvme_io": true, 00:18:23.611 "nvme_io_md": false, 00:18:23.611 "write_zeroes": true, 00:18:23.611 "zcopy": false, 00:18:23.611 "get_zone_info": false, 00:18:23.611 "zone_management": false, 00:18:23.611 "zone_append": false, 00:18:23.611 "compare": true, 00:18:23.611 "compare_and_write": false, 00:18:23.611 "abort": true, 00:18:23.611 "seek_hole": false, 00:18:23.611 "seek_data": false, 00:18:23.611 "copy": true, 00:18:23.611 "nvme_iov_md": false 00:18:23.611 }, 00:18:23.611 "driver_specific": { 00:18:23.611 "nvme": [ 00:18:23.611 { 00:18:23.611 "pci_address": "0000:00:11.0", 00:18:23.611 "trid": { 00:18:23.611 "trtype": "PCIe", 00:18:23.611 "traddr": "0000:00:11.0" 00:18:23.611 }, 00:18:23.611 "ctrlr_data": { 00:18:23.611 "cntlid": 0, 00:18:23.611 "vendor_id": "0x1b36", 00:18:23.611 "model_number": "QEMU NVMe Ctrl", 00:18:23.611 
"serial_number": "12341", 00:18:23.611 "firmware_revision": "8.0.0", 00:18:23.611 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:23.611 "oacs": { 00:18:23.611 "security": 0, 00:18:23.611 "format": 1, 00:18:23.611 "firmware": 0, 00:18:23.611 "ns_manage": 1 00:18:23.611 }, 00:18:23.611 "multi_ctrlr": false, 00:18:23.611 "ana_reporting": false 00:18:23.611 }, 00:18:23.611 "vs": { 00:18:23.611 "nvme_version": "1.4" 00:18:23.611 }, 00:18:23.611 "ns_data": { 00:18:23.611 "id": 1, 00:18:23.611 "can_share": false 00:18:23.611 } 00:18:23.611 } 00:18:23.611 ], 00:18:23.611 "mp_policy": "active_passive" 00:18:23.611 } 00:18:23.611 } 00:18:23.611 ]' 00:18:23.611 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:23.611 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:23.611 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:23.869 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:23.869 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:23.869 20:35:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:18:23.869 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:23.869 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:23.869 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:23.869 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:23.869 20:35:17 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:24.127 20:35:18 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=0a0a2572-8e1e-495d-8138-f6334b6eab61 00:18:24.127 20:35:18 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:24.127 20:35:18 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0a0a2572-8e1e-495d-8138-f6334b6eab61 00:18:24.384 20:35:18 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:24.642 20:35:18 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f755dcd4-522b-4ef8-ab9b-6c7d20778caa 00:18:24.642 20:35:18 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f755dcd4-522b-4ef8-ab9b-6c7d20778caa 00:18:24.900 20:35:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=ae3b2783-da0a-4703-a792-91041efab059 00:18:24.900 20:35:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ae3b2783-da0a-4703-a792-91041efab059 00:18:24.900 20:35:18 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:24.900 20:35:18 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:24.900 20:35:18 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=ae3b2783-da0a-4703-a792-91041efab059 00:18:24.900 20:35:18 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:24.900 20:35:18 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size ae3b2783-da0a-4703-a792-91041efab059 00:18:24.900 20:35:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=ae3b2783-da0a-4703-a792-91041efab059 00:18:24.900 20:35:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:24.900 20:35:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:24.900 20:35:18 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1381 -- # local nb 00:18:24.900 20:35:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ae3b2783-da0a-4703-a792-91041efab059 00:18:25.158 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:25.158 { 00:18:25.158 "name": "ae3b2783-da0a-4703-a792-91041efab059", 00:18:25.158 "aliases": [ 00:18:25.158 "lvs/nvme0n1p0" 00:18:25.158 ], 00:18:25.158 "product_name": "Logical Volume", 00:18:25.158 "block_size": 4096, 00:18:25.158 "num_blocks": 26476544, 00:18:25.158 "uuid": "ae3b2783-da0a-4703-a792-91041efab059", 00:18:25.158 "assigned_rate_limits": { 00:18:25.158 "rw_ios_per_sec": 0, 00:18:25.158 "rw_mbytes_per_sec": 0, 00:18:25.158 "r_mbytes_per_sec": 0, 00:18:25.158 "w_mbytes_per_sec": 0 00:18:25.158 }, 00:18:25.158 "claimed": false, 00:18:25.158 "zoned": false, 00:18:25.158 "supported_io_types": { 00:18:25.158 "read": true, 00:18:25.158 "write": true, 00:18:25.158 "unmap": true, 00:18:25.158 "flush": false, 00:18:25.158 "reset": true, 00:18:25.158 "nvme_admin": false, 00:18:25.158 "nvme_io": false, 00:18:25.158 "nvme_io_md": false, 00:18:25.158 "write_zeroes": true, 00:18:25.158 "zcopy": false, 00:18:25.158 "get_zone_info": false, 00:18:25.158 "zone_management": false, 00:18:25.158 "zone_append": false, 00:18:25.158 "compare": false, 00:18:25.158 "compare_and_write": false, 00:18:25.158 "abort": false, 00:18:25.158 "seek_hole": true, 00:18:25.158 "seek_data": true, 00:18:25.158 "copy": false, 00:18:25.158 "nvme_iov_md": false 00:18:25.158 }, 00:18:25.158 "driver_specific": { 00:18:25.158 "lvol": { 00:18:25.158 "lvol_store_uuid": "f755dcd4-522b-4ef8-ab9b-6c7d20778caa", 00:18:25.158 "base_bdev": "nvme0n1", 00:18:25.158 "thin_provision": true, 00:18:25.158 "num_allocated_clusters": 0, 00:18:25.158 "snapshot": false, 00:18:25.158 "clone": false, 00:18:25.158 "esnap_clone": false 00:18:25.158 } 00:18:25.158 } 00:18:25.158 } 00:18:25.158 ]' 00:18:25.158 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:25.158 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:25.158 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:25.158 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:25.158 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:25.158 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:25.158 20:35:19 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:25.158 20:35:19 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:25.158 20:35:19 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:25.724 20:35:19 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:25.724 20:35:19 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:25.724 20:35:19 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size ae3b2783-da0a-4703-a792-91041efab059 00:18:25.724 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=ae3b2783-da0a-4703-a792-91041efab059 00:18:25.724 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:25.724 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:25.724 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 
00:18:25.724 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ae3b2783-da0a-4703-a792-91041efab059 00:18:25.724 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:25.724 { 00:18:25.724 "name": "ae3b2783-da0a-4703-a792-91041efab059", 00:18:25.724 "aliases": [ 00:18:25.724 "lvs/nvme0n1p0" 00:18:25.724 ], 00:18:25.724 "product_name": "Logical Volume", 00:18:25.724 "block_size": 4096, 00:18:25.724 "num_blocks": 26476544, 00:18:25.724 "uuid": "ae3b2783-da0a-4703-a792-91041efab059", 00:18:25.724 "assigned_rate_limits": { 00:18:25.724 "rw_ios_per_sec": 0, 00:18:25.724 "rw_mbytes_per_sec": 0, 00:18:25.724 "r_mbytes_per_sec": 0, 00:18:25.724 "w_mbytes_per_sec": 0 00:18:25.724 }, 00:18:25.724 "claimed": false, 00:18:25.724 "zoned": false, 00:18:25.724 "supported_io_types": { 00:18:25.724 "read": true, 00:18:25.724 "write": true, 00:18:25.724 "unmap": true, 00:18:25.724 "flush": false, 00:18:25.724 "reset": true, 00:18:25.724 "nvme_admin": false, 00:18:25.724 "nvme_io": false, 00:18:25.724 "nvme_io_md": false, 00:18:25.724 "write_zeroes": true, 00:18:25.724 "zcopy": false, 00:18:25.724 "get_zone_info": false, 00:18:25.724 "zone_management": false, 00:18:25.724 "zone_append": false, 00:18:25.724 "compare": false, 00:18:25.724 "compare_and_write": false, 00:18:25.725 "abort": false, 00:18:25.725 "seek_hole": true, 00:18:25.725 "seek_data": true, 00:18:25.725 "copy": false, 00:18:25.725 "nvme_iov_md": false 00:18:25.725 }, 00:18:25.725 "driver_specific": { 00:18:25.725 "lvol": { 00:18:25.725 "lvol_store_uuid": "f755dcd4-522b-4ef8-ab9b-6c7d20778caa", 00:18:25.725 "base_bdev": "nvme0n1", 00:18:25.725 "thin_provision": true, 00:18:25.725 "num_allocated_clusters": 0, 00:18:25.725 "snapshot": false, 00:18:25.725 "clone": false, 00:18:25.725 "esnap_clone": false 00:18:25.725 } 00:18:25.725 } 00:18:25.725 } 00:18:25.725 ]' 00:18:25.725 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:25.983 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:25.983 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:25.983 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:25.983 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:25.983 20:35:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:25.983 20:35:19 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:25.983 20:35:19 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:26.241 20:35:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:18:26.241 20:35:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size ae3b2783-da0a-4703-a792-91041efab059 00:18:26.241 20:35:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=ae3b2783-da0a-4703-a792-91041efab059 00:18:26.241 20:35:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:26.241 20:35:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:26.241 20:35:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:26.241 20:35:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ae3b2783-da0a-4703-a792-91041efab059 00:18:26.499 20:35:20 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:26.499 { 00:18:26.499 "name": "ae3b2783-da0a-4703-a792-91041efab059", 00:18:26.499 "aliases": [ 00:18:26.499 "lvs/nvme0n1p0" 00:18:26.499 ], 00:18:26.499 "product_name": "Logical Volume", 00:18:26.499 "block_size": 4096, 00:18:26.499 "num_blocks": 26476544, 00:18:26.499 "uuid": "ae3b2783-da0a-4703-a792-91041efab059", 00:18:26.499 "assigned_rate_limits": { 00:18:26.499 "rw_ios_per_sec": 0, 00:18:26.499 "rw_mbytes_per_sec": 0, 00:18:26.499 "r_mbytes_per_sec": 0, 00:18:26.499 "w_mbytes_per_sec": 0 00:18:26.499 }, 00:18:26.499 "claimed": false, 00:18:26.499 "zoned": false, 00:18:26.499 "supported_io_types": { 00:18:26.499 "read": true, 00:18:26.499 "write": true, 00:18:26.499 "unmap": true, 00:18:26.499 "flush": false, 00:18:26.499 "reset": true, 00:18:26.499 "nvme_admin": false, 00:18:26.499 "nvme_io": false, 00:18:26.499 "nvme_io_md": false, 00:18:26.499 "write_zeroes": true, 00:18:26.499 "zcopy": false, 00:18:26.499 "get_zone_info": false, 00:18:26.499 "zone_management": false, 00:18:26.499 "zone_append": false, 00:18:26.499 "compare": false, 00:18:26.499 "compare_and_write": false, 00:18:26.499 "abort": false, 00:18:26.499 "seek_hole": true, 00:18:26.499 "seek_data": true, 00:18:26.499 "copy": false, 00:18:26.499 "nvme_iov_md": false 00:18:26.499 }, 00:18:26.499 "driver_specific": { 00:18:26.499 "lvol": { 00:18:26.499 "lvol_store_uuid": "f755dcd4-522b-4ef8-ab9b-6c7d20778caa", 00:18:26.499 "base_bdev": "nvme0n1", 00:18:26.499 "thin_provision": true, 00:18:26.499 "num_allocated_clusters": 0, 00:18:26.499 "snapshot": false, 00:18:26.499 "clone": false, 00:18:26.499 "esnap_clone": false 00:18:26.499 } 00:18:26.499 } 00:18:26.499 } 00:18:26.499 ]' 00:18:26.499 20:35:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:26.499 20:35:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:26.499 20:35:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:26.499 20:35:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:26.499 20:35:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:26.499 20:35:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:26.499 20:35:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:18:26.499 20:35:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ae3b2783-da0a-4703-a792-91041efab059 -c nvc0n1p0 --l2p_dram_limit 20 00:18:26.758 [2024-07-12 20:35:20.772127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.758 [2024-07-12 20:35:20.772206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:26.758 [2024-07-12 20:35:20.772327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:26.758 [2024-07-12 20:35:20.772344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.758 [2024-07-12 20:35:20.772450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.758 [2024-07-12 20:35:20.772474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:26.758 [2024-07-12 20:35:20.772494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:26.758 [2024-07-12 20:35:20.772507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.758 [2024-07-12 20:35:20.772537] mngt/ftl_mngt_bdev.c: 
196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:26.758 [2024-07-12 20:35:20.772929] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:26.758 [2024-07-12 20:35:20.772966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.758 [2024-07-12 20:35:20.772988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:26.758 [2024-07-12 20:35:20.773004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:18:26.758 [2024-07-12 20:35:20.773016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.758 [2024-07-12 20:35:20.773170] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cb76c61c-285c-45f1-b004-928d9ecdbd8a 00:18:26.758 [2024-07-12 20:35:20.774954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.758 [2024-07-12 20:35:20.775009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:26.758 [2024-07-12 20:35:20.775036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:26.758 [2024-07-12 20:35:20.775051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.758 [2024-07-12 20:35:20.785609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.758 [2024-07-12 20:35:20.785874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:26.758 [2024-07-12 20:35:20.785902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.508 ms 00:18:26.758 [2024-07-12 20:35:20.785922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.758 [2024-07-12 20:35:20.786063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.758 [2024-07-12 20:35:20.786085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:26.758 [2024-07-12 20:35:20.786099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:26.758 [2024-07-12 20:35:20.786115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.758 [2024-07-12 20:35:20.786204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.758 [2024-07-12 20:35:20.786226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:26.758 [2024-07-12 20:35:20.786240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:26.758 [2024-07-12 20:35:20.786268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.758 [2024-07-12 20:35:20.786304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:26.758 [2024-07-12 20:35:20.788632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.758 [2024-07-12 20:35:20.788682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:26.758 [2024-07-12 20:35:20.788700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:18:26.758 [2024-07-12 20:35:20.788721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.758 [2024-07-12 20:35:20.788778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.758 [2024-07-12 20:35:20.788792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:26.758 [2024-07-12 20:35:20.788810] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:26.758 [2024-07-12 20:35:20.788821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.758 [2024-07-12 20:35:20.788853] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:26.758 [2024-07-12 20:35:20.789028] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:26.758 [2024-07-12 20:35:20.789052] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:26.758 [2024-07-12 20:35:20.789083] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:26.758 [2024-07-12 20:35:20.789101] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:26.758 [2024-07-12 20:35:20.789123] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:26.758 [2024-07-12 20:35:20.789143] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:26.758 [2024-07-12 20:35:20.789170] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:26.758 [2024-07-12 20:35:20.789184] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:26.758 [2024-07-12 20:35:20.789196] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:26.758 [2024-07-12 20:35:20.789210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.758 [2024-07-12 20:35:20.789222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:26.758 [2024-07-12 20:35:20.789237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:18:26.758 [2024-07-12 20:35:20.789251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.758 [2024-07-12 20:35:20.789502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.759 [2024-07-12 20:35:20.789563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:26.759 [2024-07-12 20:35:20.789748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:18:26.759 [2024-07-12 20:35:20.789888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.759 [2024-07-12 20:35:20.790154] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:26.759 [2024-07-12 20:35:20.790295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:26.759 [2024-07-12 20:35:20.790425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:26.759 [2024-07-12 20:35:20.790449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.759 [2024-07-12 20:35:20.790467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:26.759 [2024-07-12 20:35:20.790479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:26.759 [2024-07-12 20:35:20.790493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:26.759 [2024-07-12 20:35:20.790504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:26.759 [2024-07-12 20:35:20.790518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:26.759 [2024-07-12 20:35:20.790528] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:26.759 [2024-07-12 
20:35:20.790541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:26.759 [2024-07-12 20:35:20.790552] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:26.759 [2024-07-12 20:35:20.790568] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:26.759 [2024-07-12 20:35:20.790589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:26.759 [2024-07-12 20:35:20.790603] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:26.759 [2024-07-12 20:35:20.790614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.759 [2024-07-12 20:35:20.790627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:26.759 [2024-07-12 20:35:20.790638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:26.759 [2024-07-12 20:35:20.790650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.759 [2024-07-12 20:35:20.790661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:26.759 [2024-07-12 20:35:20.790675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:26.759 [2024-07-12 20:35:20.790685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:26.759 [2024-07-12 20:35:20.790713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:26.759 [2024-07-12 20:35:20.790723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:26.759 [2024-07-12 20:35:20.790738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:26.759 [2024-07-12 20:35:20.790748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:26.759 [2024-07-12 20:35:20.790761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:26.759 [2024-07-12 20:35:20.790772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:26.759 [2024-07-12 20:35:20.790787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:26.759 [2024-07-12 20:35:20.790798] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:26.759 [2024-07-12 20:35:20.790818] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:26.759 [2024-07-12 20:35:20.790829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:26.759 [2024-07-12 20:35:20.790842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:26.759 [2024-07-12 20:35:20.790854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:26.759 [2024-07-12 20:35:20.790866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:26.759 [2024-07-12 20:35:20.790878] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:26.759 [2024-07-12 20:35:20.790903] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:26.759 [2024-07-12 20:35:20.790933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:26.759 [2024-07-12 20:35:20.790947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:26.759 [2024-07-12 20:35:20.790959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.759 [2024-07-12 20:35:20.790973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:26.759 [2024-07-12 20:35:20.790985] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.75 MiB 00:18:26.759 [2024-07-12 20:35:20.790998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.759 [2024-07-12 20:35:20.791008] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:26.759 [2024-07-12 20:35:20.791025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:26.759 [2024-07-12 20:35:20.791037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:26.759 [2024-07-12 20:35:20.791051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.759 [2024-07-12 20:35:20.791067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:26.759 [2024-07-12 20:35:20.791082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:26.759 [2024-07-12 20:35:20.791094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:26.759 [2024-07-12 20:35:20.791107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:26.759 [2024-07-12 20:35:20.791118] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:26.759 [2024-07-12 20:35:20.791132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:26.759 [2024-07-12 20:35:20.791148] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:26.759 [2024-07-12 20:35:20.791168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:26.759 [2024-07-12 20:35:20.791182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:26.759 [2024-07-12 20:35:20.791197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:26.759 [2024-07-12 20:35:20.791209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:26.759 [2024-07-12 20:35:20.791223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:26.759 [2024-07-12 20:35:20.791235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:26.759 [2024-07-12 20:35:20.791252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:26.759 [2024-07-12 20:35:20.791281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:26.759 [2024-07-12 20:35:20.791301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:26.759 [2024-07-12 20:35:20.791314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:26.759 [2024-07-12 20:35:20.791329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:26.759 [2024-07-12 20:35:20.791342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:26.759 [2024-07-12 20:35:20.791357] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:26.759 [2024-07-12 20:35:20.791377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:26.759 [2024-07-12 20:35:20.791391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:26.759 [2024-07-12 20:35:20.791403] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:26.759 [2024-07-12 20:35:20.791419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:26.759 [2024-07-12 20:35:20.791442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:26.759 [2024-07-12 20:35:20.791456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:26.759 [2024-07-12 20:35:20.791468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:26.759 [2024-07-12 20:35:20.791483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:26.759 [2024-07-12 20:35:20.791497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.759 [2024-07-12 20:35:20.791514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:26.759 [2024-07-12 20:35:20.791527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:18:26.759 [2024-07-12 20:35:20.791541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.759 [2024-07-12 20:35:20.791595] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
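The scrub announced just above runs on the NV cache portion of the device stack assembled by the earlier RPC calls. Condensed into one place, that stack is roughly the following; each command is copied from the trace, with <lvs-uuid> and <lvol-uuid> standing in for f755dcd4-522b-4ef8-ab9b-6c7d20778caa and ae3b2783-da0a-4703-a792-91041efab059 respectively:

   RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
   $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe -> nvme0n1
   $RPC bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore "lvs" on the base bdev
   $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>             # thin lvol, 103424 MiB (26476544 x 4 KiB blocks)
   $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe -> nvc0n1
   $RPC bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB split -> nvc0n1p0
   $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20

The sizes match the layout dump above: a 103424 MiB base device, a 5171 MiB NV cache, and an L2P capped at 20 MiB of DRAM.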
00:18:26.759 [2024-07-12 20:35:20.791626] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:29.322 [2024-07-12 20:35:23.094622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.322 [2024-07-12 20:35:23.094709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:29.322 [2024-07-12 20:35:23.094741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2303.023 ms 00:18:29.322 [2024-07-12 20:35:23.094767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.322 [2024-07-12 20:35:23.120243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.322 [2024-07-12 20:35:23.120543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:29.322 [2024-07-12 20:35:23.120594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.361 ms 00:18:29.322 [2024-07-12 20:35:23.120616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.322 [2024-07-12 20:35:23.120746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.322 [2024-07-12 20:35:23.120767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:29.322 [2024-07-12 20:35:23.120792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:29.322 [2024-07-12 20:35:23.120808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.322 [2024-07-12 20:35:23.135345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.322 [2024-07-12 20:35:23.135408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:29.322 [2024-07-12 20:35:23.135473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.443 ms 00:18:29.322 [2024-07-12 20:35:23.135492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.322 [2024-07-12 20:35:23.135538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.322 [2024-07-12 20:35:23.135562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:29.322 [2024-07-12 20:35:23.135576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:29.322 [2024-07-12 20:35:23.135590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.322 [2024-07-12 20:35:23.136215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.322 [2024-07-12 20:35:23.136257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:29.322 [2024-07-12 20:35:23.136284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:18:29.322 [2024-07-12 20:35:23.136317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.322 [2024-07-12 20:35:23.136492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.322 [2024-07-12 20:35:23.136517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:29.322 [2024-07-12 20:35:23.136531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:18:29.322 [2024-07-12 20:35:23.136545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.322 [2024-07-12 20:35:23.144542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.322 [2024-07-12 20:35:23.144600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:29.322 [2024-07-12 
20:35:23.144619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.972 ms 00:18:29.322 [2024-07-12 20:35:23.144634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.322 [2024-07-12 20:35:23.155258] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:29.323 [2024-07-12 20:35:23.163190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.323 [2024-07-12 20:35:23.163230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:29.323 [2024-07-12 20:35:23.163269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.463 ms 00:18:29.323 [2024-07-12 20:35:23.163283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.323 [2024-07-12 20:35:23.219500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.323 [2024-07-12 20:35:23.219576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:29.323 [2024-07-12 20:35:23.219605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.145 ms 00:18:29.323 [2024-07-12 20:35:23.219619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.323 [2024-07-12 20:35:23.219887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.323 [2024-07-12 20:35:23.219908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:29.323 [2024-07-12 20:35:23.219924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:18:29.323 [2024-07-12 20:35:23.219937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.323 [2024-07-12 20:35:23.223984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.323 [2024-07-12 20:35:23.224154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:29.323 [2024-07-12 20:35:23.224313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.998 ms 00:18:29.323 [2024-07-12 20:35:23.224367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.323 [2024-07-12 20:35:23.227508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.323 [2024-07-12 20:35:23.227662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:29.323 [2024-07-12 20:35:23.227887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.959 ms 00:18:29.323 [2024-07-12 20:35:23.227911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.323 [2024-07-12 20:35:23.228396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.323 [2024-07-12 20:35:23.228426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:29.323 [2024-07-12 20:35:23.228447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:18:29.323 [2024-07-12 20:35:23.228460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.323 [2024-07-12 20:35:23.263254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.323 [2024-07-12 20:35:23.263366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:29.323 [2024-07-12 20:35:23.263408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.742 ms 00:18:29.323 [2024-07-12 20:35:23.263426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.323 [2024-07-12 20:35:23.268695] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.323 [2024-07-12 20:35:23.268738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:29.323 [2024-07-12 20:35:23.268761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.203 ms 00:18:29.323 [2024-07-12 20:35:23.268775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.323 [2024-07-12 20:35:23.272398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.323 [2024-07-12 20:35:23.272453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:29.323 [2024-07-12 20:35:23.272473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.575 ms 00:18:29.323 [2024-07-12 20:35:23.272485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.323 [2024-07-12 20:35:23.276404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.323 [2024-07-12 20:35:23.276447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:29.323 [2024-07-12 20:35:23.276471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.873 ms 00:18:29.323 [2024-07-12 20:35:23.276483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.323 [2024-07-12 20:35:23.276546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.323 [2024-07-12 20:35:23.276565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:29.323 [2024-07-12 20:35:23.276581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:29.323 [2024-07-12 20:35:23.276602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.323 [2024-07-12 20:35:23.276712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.323 [2024-07-12 20:35:23.276731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:29.323 [2024-07-12 20:35:23.276750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:29.323 [2024-07-12 20:35:23.276762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.323 [2024-07-12 20:35:23.277986] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2505.375 ms, result 0 00:18:29.323 { 00:18:29.323 "name": "ftl0", 00:18:29.323 "uuid": "cb76c61c-285c-45f1-b004-928d9ecdbd8a" 00:18:29.323 } 00:18:29.323 20:35:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:29.323 20:35:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:18:29.323 20:35:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:18:29.582 20:35:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:29.582 [2024-07-12 20:35:23.704504] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:29.582 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:29.582 Zero copy mechanism will not be used. 00:18:29.582 Running I/O for 4 seconds... 
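Two things are worth noting before the first result table: the harness first confirms the new device is registered (bdev_ftl_get_stats piped through jq and grep, as traced above) and only then kicks off the workload; and the zero-copy notice is purely about I/O size, since 69632 B = 17 x 4096 B (68 KiB) sits just above bdevperf's 65536 B (16 x 4096 B) zero-copy threshold, so this run falls back to regular buffers. A sketch of those two steps, with RPC and SPDK as in the earlier sketches:

   $RPC bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0      # ftl0 really registered?
   "$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests -q 1 -w randwrite -t 4 -o 69632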
00:18:33.765 00:18:33.765 Latency(us) 00:18:33.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.765 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:33.765 ftl0 : 4.00 1826.10 121.26 0.00 0.00 571.55 238.31 893.67 00:18:33.765 =================================================================================================================== 00:18:33.765 Total : 1826.10 121.26 0.00 0.00 571.55 238.31 893.67 00:18:33.765 [2024-07-12 20:35:27.712386] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:33.765 0 00:18:33.765 20:35:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:33.765 [2024-07-12 20:35:27.847403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:33.765 Running I/O for 4 seconds... 00:18:37.945 00:18:37.945 Latency(us) 00:18:37.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.945 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.945 ftl0 : 4.02 7692.72 30.05 0.00 0.00 16598.41 320.23 32172.22 00:18:37.945 =================================================================================================================== 00:18:37.945 Total : 7692.72 30.05 0.00 0.00 16598.41 0.00 32172.22 00:18:37.945 [2024-07-12 20:35:31.872194] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:37.945 0 00:18:37.945 20:35:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:37.945 [2024-07-12 20:35:32.010658] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:37.945 Running I/O for 4 seconds... 
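A quick cross-check of the two randwrite tables above: throughput is IOPS x I/O size, so 1826.10 IOPS x 68 KiB ≈ 121.3 MiB/s and 7692.72 IOPS x 4 KiB ≈ 30.0 MiB/s, matching the MiB/s columns; and by Little's law the in-flight depth is IOPS x average latency, giving 1826.10 x 571.55 us ≈ 1.0 for the queue-depth-1 run and 7692.72 x 16598.41 us ≈ 128 for the queue-depth-128 run, so the reported latencies are consistent with the configured depths. The verify workload that follows (-w verify, 4 KiB, queue depth 128) additionally reads back and checks what it wrote over the LBA range shown in its table.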
00:18:42.158 00:18:42.158 Latency(us) 00:18:42.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.158 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:42.158 Verification LBA range: start 0x0 length 0x1400000 00:18:42.158 ftl0 : 4.01 6381.74 24.93 0.00 0.00 19990.46 359.33 24307.90 00:18:42.158 =================================================================================================================== 00:18:42.158 Total : 6381.74 24.93 0.00 0.00 19990.46 0.00 24307.90 00:18:42.158 [2024-07-12 20:35:36.030560] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:42.158 0 00:18:42.158 20:35:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:42.419 [2024-07-12 20:35:36.311206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.419 [2024-07-12 20:35:36.311304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:42.419 [2024-07-12 20:35:36.311330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:42.419 [2024-07-12 20:35:36.311343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.419 [2024-07-12 20:35:36.311379] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:42.419 [2024-07-12 20:35:36.312241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.419 [2024-07-12 20:35:36.312284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:42.419 [2024-07-12 20:35:36.312317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:18:42.419 [2024-07-12 20:35:36.312348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.419 [2024-07-12 20:35:36.314148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.419 [2024-07-12 20:35:36.314231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:42.419 [2024-07-12 20:35:36.314249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.772 ms 00:18:42.419 [2024-07-12 20:35:36.314289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.419 [2024-07-12 20:35:36.492657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.419 [2024-07-12 20:35:36.492754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:42.419 [2024-07-12 20:35:36.492778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.338 ms 00:18:42.419 [2024-07-12 20:35:36.492794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.419 [2024-07-12 20:35:36.499391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.419 [2024-07-12 20:35:36.499431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:42.419 [2024-07-12 20:35:36.499458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.550 ms 00:18:42.419 [2024-07-12 20:35:36.499474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.419 [2024-07-12 20:35:36.501401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.419 [2024-07-12 20:35:36.501448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:42.419 [2024-07-12 20:35:36.501465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.835 ms 00:18:42.419 [2024-07-12 20:35:36.501482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.419 [2024-07-12 20:35:36.505819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.419 [2024-07-12 20:35:36.505904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:42.419 [2024-07-12 20:35:36.505924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.298 ms 00:18:42.419 [2024-07-12 20:35:36.505942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.419 [2024-07-12 20:35:36.506085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.419 [2024-07-12 20:35:36.506108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:42.419 [2024-07-12 20:35:36.506123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:42.419 [2024-07-12 20:35:36.506141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.419 [2024-07-12 20:35:36.508009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.419 [2024-07-12 20:35:36.508053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:42.419 [2024-07-12 20:35:36.508069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:18:42.419 [2024-07-12 20:35:36.508083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.419 [2024-07-12 20:35:36.509432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.419 [2024-07-12 20:35:36.509472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:42.419 [2024-07-12 20:35:36.509488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.311 ms 00:18:42.419 [2024-07-12 20:35:36.509502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.419 [2024-07-12 20:35:36.510582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.419 [2024-07-12 20:35:36.510626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:42.419 [2024-07-12 20:35:36.510642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:18:42.419 [2024-07-12 20:35:36.510658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.419 [2024-07-12 20:35:36.511804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.419 [2024-07-12 20:35:36.511856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:42.419 [2024-07-12 20:35:36.511874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:18:42.419 [2024-07-12 20:35:36.511888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.419 [2024-07-12 20:35:36.511926] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:42.419 [2024-07-12 20:35:36.511973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.511997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 
[2024-07-12 20:35:36.512040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:42.419 [2024-07-12 20:35:36.512392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:18:42.420 [2024-07-12 20:35:36.512404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.512946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:42.420 [2024-07-12 20:35:36.513487] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:42.420 [2024-07-12 20:35:36.513500] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cb76c61c-285c-45f1-b004-928d9ecdbd8a 00:18:42.420 [2024-07-12 20:35:36.513516] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:42.420 [2024-07-12 20:35:36.513527] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:18:42.420 [2024-07-12 20:35:36.513541] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:42.420 [2024-07-12 20:35:36.513553] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:42.420 [2024-07-12 20:35:36.513576] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:42.420 [2024-07-12 20:35:36.513588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:42.420 [2024-07-12 20:35:36.513616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:42.420 [2024-07-12 20:35:36.513626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:42.420 [2024-07-12 20:35:36.513639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:42.420 [2024-07-12 20:35:36.513651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.420 [2024-07-12 20:35:36.513665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:42.420 [2024-07-12 20:35:36.513678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms 00:18:42.420 [2024-07-12 20:35:36.513692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.420 [2024-07-12 20:35:36.515789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.420 [2024-07-12 20:35:36.515824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:42.420 [2024-07-12 20:35:36.515840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:18:42.420 [2024-07-12 20:35:36.515854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.420 [2024-07-12 20:35:36.516003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.420 [2024-07-12 20:35:36.516024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:42.420 [2024-07-12 20:35:36.516038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:18:42.420 [2024-07-12 20:35:36.516055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.420 [2024-07-12 20:35:36.523205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.420 [2024-07-12 20:35:36.523266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:42.420 [2024-07-12 20:35:36.523286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.420 [2024-07-12 20:35:36.523305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.420 [2024-07-12 20:35:36.523379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.420 [2024-07-12 20:35:36.523404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:42.420 [2024-07-12 20:35:36.523417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.421 [2024-07-12 20:35:36.523431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.421 [2024-07-12 20:35:36.523503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.421 [2024-07-12 20:35:36.523527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:42.421 [2024-07-12 20:35:36.523540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.421 [2024-07-12 20:35:36.523554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.421 [2024-07-12 20:35:36.523580] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.421 [2024-07-12 20:35:36.523597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:42.421 [2024-07-12 20:35:36.523609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.421 [2024-07-12 20:35:36.523626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.421 [2024-07-12 20:35:36.536954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.421 [2024-07-12 20:35:36.537022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:42.421 [2024-07-12 20:35:36.537065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.421 [2024-07-12 20:35:36.537085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.421 [2024-07-12 20:35:36.547113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.421 [2024-07-12 20:35:36.547179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:42.421 [2024-07-12 20:35:36.547198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.421 [2024-07-12 20:35:36.547214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.421 [2024-07-12 20:35:36.547330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.421 [2024-07-12 20:35:36.547372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:42.421 [2024-07-12 20:35:36.547386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.421 [2024-07-12 20:35:36.547401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.421 [2024-07-12 20:35:36.547461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.421 [2024-07-12 20:35:36.547486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:42.421 [2024-07-12 20:35:36.547500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.421 [2024-07-12 20:35:36.547517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.421 [2024-07-12 20:35:36.547626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.421 [2024-07-12 20:35:36.547649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:42.421 [2024-07-12 20:35:36.547662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.421 [2024-07-12 20:35:36.547675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.421 [2024-07-12 20:35:36.547751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.421 [2024-07-12 20:35:36.547777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:42.421 [2024-07-12 20:35:36.547802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.421 [2024-07-12 20:35:36.547815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.421 [2024-07-12 20:35:36.547877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.421 [2024-07-12 20:35:36.547904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:42.421 [2024-07-12 20:35:36.547918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.421 [2024-07-12 20:35:36.547941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:42.421 [2024-07-12 20:35:36.548012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.421 [2024-07-12 20:35:36.548044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:42.421 [2024-07-12 20:35:36.548057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.421 [2024-07-12 20:35:36.548075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.421 [2024-07-12 20:35:36.548278] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 237.011 ms, result 0 00:18:42.421 true 00:18:42.680 20:35:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 91960 00:18:42.680 20:35:36 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 91960 ']' 00:18:42.680 20:35:36 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 91960 00:18:42.680 20:35:36 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname 00:18:42.680 20:35:36 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.680 20:35:36 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91960 00:18:42.680 killing process with pid 91960 00:18:42.680 Received shutdown signal, test time was about 4.000000 seconds 00:18:42.680 00:18:42.680 Latency(us) 00:18:42.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.680 =================================================================================================================== 00:18:42.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.680 20:35:36 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:42.680 20:35:36 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:42.680 20:35:36 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91960' 00:18:42.680 20:35:36 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 91960 00:18:42.680 20:35:36 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 91960 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:45.967 Remove shared memory files 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:45.967 ************************************ 00:18:45.967 END TEST ftl_bdevperf 00:18:45.967 ************************************ 00:18:45.967 00:18:45.967 real 0m23.606s 00:18:45.967 user 0m27.144s 00:18:45.967 sys 0m1.233s 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:45.967 20:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@10 
-- # set +x 00:18:45.967 20:35:39 ftl -- common/autotest_common.sh@1142 -- # return 0 00:18:45.967 20:35:39 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:45.967 20:35:39 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:45.967 20:35:39 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:45.968 20:35:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:45.968 ************************************ 00:18:45.968 START TEST ftl_trim 00:18:45.968 ************************************ 00:18:45.968 20:35:39 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:45.968 * Looking for test storage... 00:18:45.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:45.968 20:35:39 ftl.ftl_trim -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=92307 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 92307 00:18:45.968 20:35:39 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 92307 ']' 00:18:45.968 20:35:39 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:45.968 20:35:39 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.968 20:35:39 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.968 20:35:39 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.968 20:35:39 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.968 20:35:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:45.968 [2024-07-12 20:35:39.807681] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:18:45.968 [2024-07-12 20:35:39.807838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92307 ] 00:18:45.968 [2024-07-12 20:35:39.951693] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
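For reference, the bring-up recorded in the trace above reduces to a handful of shell steps: launch spdk_tgt on the first three cores, wait for its RPC socket, and only then start driving it with rpc.py. A minimal sketch, using the repository paths and the default /var/tmp/spdk.sock socket shown in the trace (the polling loop is only an approximation of what the waitforlisten helper amounts to):

    # Start the SPDK target on cores 0-2, as trim.sh does (-m 0x7), and remember its PID.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
    svcpid=$!

    # Wait until the target answers on its RPC socket before issuing any further RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
            sleep 0.5
    done

Once the socket answers, the test builds the bdev stack with the rpc.py calls traced below.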
00:18:45.968 [2024-07-12 20:35:39.970317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:45.968 [2024-07-12 20:35:40.062138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.968 [2024-07-12 20:35:40.062222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.968 [2024-07-12 20:35:40.062285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.924 20:35:40 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.924 20:35:40 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:18:46.924 20:35:40 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:46.924 20:35:40 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:46.924 20:35:40 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:46.924 20:35:40 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:46.924 20:35:40 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:46.924 20:35:40 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:47.182 20:35:41 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:47.182 20:35:41 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:47.182 20:35:41 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:47.182 20:35:41 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:47.182 20:35:41 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:47.182 20:35:41 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:47.182 20:35:41 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:47.182 20:35:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:47.441 20:35:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:47.441 { 00:18:47.441 "name": "nvme0n1", 00:18:47.441 "aliases": [ 00:18:47.441 "4733f147-0559-43e6-8a26-b45122b496cd" 00:18:47.441 ], 00:18:47.441 "product_name": "NVMe disk", 00:18:47.441 "block_size": 4096, 00:18:47.441 "num_blocks": 1310720, 00:18:47.441 "uuid": "4733f147-0559-43e6-8a26-b45122b496cd", 00:18:47.441 "assigned_rate_limits": { 00:18:47.441 "rw_ios_per_sec": 0, 00:18:47.441 "rw_mbytes_per_sec": 0, 00:18:47.441 "r_mbytes_per_sec": 0, 00:18:47.441 "w_mbytes_per_sec": 0 00:18:47.441 }, 00:18:47.441 "claimed": true, 00:18:47.441 "claim_type": "read_many_write_one", 00:18:47.441 "zoned": false, 00:18:47.441 "supported_io_types": { 00:18:47.441 "read": true, 00:18:47.441 "write": true, 00:18:47.441 "unmap": true, 00:18:47.441 "flush": true, 00:18:47.441 "reset": true, 00:18:47.441 "nvme_admin": true, 00:18:47.441 "nvme_io": true, 00:18:47.441 "nvme_io_md": false, 00:18:47.441 "write_zeroes": true, 00:18:47.441 "zcopy": false, 00:18:47.441 "get_zone_info": false, 00:18:47.441 "zone_management": false, 00:18:47.441 "zone_append": false, 00:18:47.441 "compare": true, 00:18:47.441 "compare_and_write": false, 00:18:47.441 "abort": true, 00:18:47.441 "seek_hole": false, 00:18:47.441 "seek_data": false, 00:18:47.441 "copy": true, 00:18:47.441 "nvme_iov_md": false 00:18:47.441 }, 00:18:47.441 "driver_specific": { 00:18:47.441 "nvme": [ 00:18:47.441 { 00:18:47.441 "pci_address": "0000:00:11.0", 00:18:47.441 "trid": { 00:18:47.441 "trtype": "PCIe", 00:18:47.441 "traddr": "0000:00:11.0" 00:18:47.441 }, 00:18:47.441 
"ctrlr_data": { 00:18:47.441 "cntlid": 0, 00:18:47.441 "vendor_id": "0x1b36", 00:18:47.441 "model_number": "QEMU NVMe Ctrl", 00:18:47.441 "serial_number": "12341", 00:18:47.441 "firmware_revision": "8.0.0", 00:18:47.441 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:47.441 "oacs": { 00:18:47.441 "security": 0, 00:18:47.441 "format": 1, 00:18:47.441 "firmware": 0, 00:18:47.441 "ns_manage": 1 00:18:47.441 }, 00:18:47.441 "multi_ctrlr": false, 00:18:47.441 "ana_reporting": false 00:18:47.441 }, 00:18:47.441 "vs": { 00:18:47.441 "nvme_version": "1.4" 00:18:47.441 }, 00:18:47.441 "ns_data": { 00:18:47.441 "id": 1, 00:18:47.441 "can_share": false 00:18:47.441 } 00:18:47.441 } 00:18:47.441 ], 00:18:47.441 "mp_policy": "active_passive" 00:18:47.441 } 00:18:47.441 } 00:18:47.441 ]' 00:18:47.441 20:35:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:47.441 20:35:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:47.441 20:35:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:47.441 20:35:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:47.441 20:35:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:47.441 20:35:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:18:47.441 20:35:41 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:47.441 20:35:41 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:47.441 20:35:41 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:47.441 20:35:41 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:47.441 20:35:41 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:47.700 20:35:41 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f755dcd4-522b-4ef8-ab9b-6c7d20778caa 00:18:47.700 20:35:41 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:47.700 20:35:41 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f755dcd4-522b-4ef8-ab9b-6c7d20778caa 00:18:47.958 20:35:42 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:48.217 20:35:42 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=1ef124a2-5cd2-4b56-85ed-4f5db32fa749 00:18:48.217 20:35:42 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1ef124a2-5cd2-4b56-85ed-4f5db32fa749 00:18:48.784 20:35:42 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=cada3b5d-57a1-440b-a7e5-f060f5bb9014 00:18:48.784 20:35:42 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cada3b5d-57a1-440b-a7e5-f060f5bb9014 00:18:48.784 20:35:42 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:48.784 20:35:42 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:48.784 20:35:42 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=cada3b5d-57a1-440b-a7e5-f060f5bb9014 00:18:48.784 20:35:42 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:48.784 20:35:42 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size cada3b5d-57a1-440b-a7e5-f060f5bb9014 00:18:48.784 20:35:42 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=cada3b5d-57a1-440b-a7e5-f060f5bb9014 00:18:48.784 20:35:42 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:48.784 20:35:42 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:48.784 20:35:42 
ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:48.784 20:35:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cada3b5d-57a1-440b-a7e5-f060f5bb9014 00:18:49.042 20:35:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:49.042 { 00:18:49.042 "name": "cada3b5d-57a1-440b-a7e5-f060f5bb9014", 00:18:49.042 "aliases": [ 00:18:49.042 "lvs/nvme0n1p0" 00:18:49.042 ], 00:18:49.042 "product_name": "Logical Volume", 00:18:49.042 "block_size": 4096, 00:18:49.042 "num_blocks": 26476544, 00:18:49.042 "uuid": "cada3b5d-57a1-440b-a7e5-f060f5bb9014", 00:18:49.042 "assigned_rate_limits": { 00:18:49.042 "rw_ios_per_sec": 0, 00:18:49.042 "rw_mbytes_per_sec": 0, 00:18:49.042 "r_mbytes_per_sec": 0, 00:18:49.042 "w_mbytes_per_sec": 0 00:18:49.042 }, 00:18:49.042 "claimed": false, 00:18:49.042 "zoned": false, 00:18:49.042 "supported_io_types": { 00:18:49.042 "read": true, 00:18:49.042 "write": true, 00:18:49.042 "unmap": true, 00:18:49.042 "flush": false, 00:18:49.042 "reset": true, 00:18:49.042 "nvme_admin": false, 00:18:49.042 "nvme_io": false, 00:18:49.042 "nvme_io_md": false, 00:18:49.042 "write_zeroes": true, 00:18:49.042 "zcopy": false, 00:18:49.042 "get_zone_info": false, 00:18:49.042 "zone_management": false, 00:18:49.042 "zone_append": false, 00:18:49.042 "compare": false, 00:18:49.042 "compare_and_write": false, 00:18:49.042 "abort": false, 00:18:49.042 "seek_hole": true, 00:18:49.042 "seek_data": true, 00:18:49.042 "copy": false, 00:18:49.042 "nvme_iov_md": false 00:18:49.042 }, 00:18:49.042 "driver_specific": { 00:18:49.042 "lvol": { 00:18:49.042 "lvol_store_uuid": "1ef124a2-5cd2-4b56-85ed-4f5db32fa749", 00:18:49.042 "base_bdev": "nvme0n1", 00:18:49.042 "thin_provision": true, 00:18:49.042 "num_allocated_clusters": 0, 00:18:49.042 "snapshot": false, 00:18:49.042 "clone": false, 00:18:49.042 "esnap_clone": false 00:18:49.042 } 00:18:49.042 } 00:18:49.042 } 00:18:49.042 ]' 00:18:49.042 20:35:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:49.042 20:35:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:49.042 20:35:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:49.042 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:49.042 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:49.042 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:49.042 20:35:43 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:49.042 20:35:43 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:49.042 20:35:43 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:49.300 20:35:43 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:49.300 20:35:43 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:49.300 20:35:43 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size cada3b5d-57a1-440b-a7e5-f060f5bb9014 00:18:49.300 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=cada3b5d-57a1-440b-a7e5-f060f5bb9014 00:18:49.300 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:49.300 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:49.300 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:49.300 20:35:43 ftl.ftl_trim -- 
common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cada3b5d-57a1-440b-a7e5-f060f5bb9014 00:18:49.557 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:49.557 { 00:18:49.557 "name": "cada3b5d-57a1-440b-a7e5-f060f5bb9014", 00:18:49.557 "aliases": [ 00:18:49.557 "lvs/nvme0n1p0" 00:18:49.557 ], 00:18:49.557 "product_name": "Logical Volume", 00:18:49.557 "block_size": 4096, 00:18:49.557 "num_blocks": 26476544, 00:18:49.557 "uuid": "cada3b5d-57a1-440b-a7e5-f060f5bb9014", 00:18:49.557 "assigned_rate_limits": { 00:18:49.557 "rw_ios_per_sec": 0, 00:18:49.557 "rw_mbytes_per_sec": 0, 00:18:49.557 "r_mbytes_per_sec": 0, 00:18:49.557 "w_mbytes_per_sec": 0 00:18:49.557 }, 00:18:49.557 "claimed": false, 00:18:49.557 "zoned": false, 00:18:49.557 "supported_io_types": { 00:18:49.557 "read": true, 00:18:49.557 "write": true, 00:18:49.557 "unmap": true, 00:18:49.557 "flush": false, 00:18:49.557 "reset": true, 00:18:49.557 "nvme_admin": false, 00:18:49.557 "nvme_io": false, 00:18:49.557 "nvme_io_md": false, 00:18:49.557 "write_zeroes": true, 00:18:49.557 "zcopy": false, 00:18:49.557 "get_zone_info": false, 00:18:49.557 "zone_management": false, 00:18:49.557 "zone_append": false, 00:18:49.557 "compare": false, 00:18:49.557 "compare_and_write": false, 00:18:49.557 "abort": false, 00:18:49.557 "seek_hole": true, 00:18:49.557 "seek_data": true, 00:18:49.557 "copy": false, 00:18:49.558 "nvme_iov_md": false 00:18:49.558 }, 00:18:49.558 "driver_specific": { 00:18:49.558 "lvol": { 00:18:49.558 "lvol_store_uuid": "1ef124a2-5cd2-4b56-85ed-4f5db32fa749", 00:18:49.558 "base_bdev": "nvme0n1", 00:18:49.558 "thin_provision": true, 00:18:49.558 "num_allocated_clusters": 0, 00:18:49.558 "snapshot": false, 00:18:49.558 "clone": false, 00:18:49.558 "esnap_clone": false 00:18:49.558 } 00:18:49.558 } 00:18:49.558 } 00:18:49.558 ]' 00:18:49.558 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:49.558 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:49.558 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:49.816 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:49.816 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:49.816 20:35:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:49.816 20:35:43 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:49.816 20:35:43 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:50.074 20:35:44 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:50.074 20:35:44 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:50.074 20:35:44 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size cada3b5d-57a1-440b-a7e5-f060f5bb9014 00:18:50.074 20:35:44 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=cada3b5d-57a1-440b-a7e5-f060f5bb9014 00:18:50.074 20:35:44 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:50.074 20:35:44 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:50.074 20:35:44 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:50.074 20:35:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cada3b5d-57a1-440b-a7e5-f060f5bb9014 00:18:50.332 20:35:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 
00:18:50.332 { 00:18:50.332 "name": "cada3b5d-57a1-440b-a7e5-f060f5bb9014", 00:18:50.332 "aliases": [ 00:18:50.332 "lvs/nvme0n1p0" 00:18:50.332 ], 00:18:50.332 "product_name": "Logical Volume", 00:18:50.332 "block_size": 4096, 00:18:50.332 "num_blocks": 26476544, 00:18:50.332 "uuid": "cada3b5d-57a1-440b-a7e5-f060f5bb9014", 00:18:50.332 "assigned_rate_limits": { 00:18:50.332 "rw_ios_per_sec": 0, 00:18:50.332 "rw_mbytes_per_sec": 0, 00:18:50.332 "r_mbytes_per_sec": 0, 00:18:50.332 "w_mbytes_per_sec": 0 00:18:50.332 }, 00:18:50.332 "claimed": false, 00:18:50.332 "zoned": false, 00:18:50.332 "supported_io_types": { 00:18:50.332 "read": true, 00:18:50.332 "write": true, 00:18:50.332 "unmap": true, 00:18:50.332 "flush": false, 00:18:50.332 "reset": true, 00:18:50.332 "nvme_admin": false, 00:18:50.332 "nvme_io": false, 00:18:50.332 "nvme_io_md": false, 00:18:50.332 "write_zeroes": true, 00:18:50.332 "zcopy": false, 00:18:50.332 "get_zone_info": false, 00:18:50.332 "zone_management": false, 00:18:50.332 "zone_append": false, 00:18:50.332 "compare": false, 00:18:50.332 "compare_and_write": false, 00:18:50.332 "abort": false, 00:18:50.332 "seek_hole": true, 00:18:50.332 "seek_data": true, 00:18:50.332 "copy": false, 00:18:50.332 "nvme_iov_md": false 00:18:50.332 }, 00:18:50.332 "driver_specific": { 00:18:50.332 "lvol": { 00:18:50.332 "lvol_store_uuid": "1ef124a2-5cd2-4b56-85ed-4f5db32fa749", 00:18:50.332 "base_bdev": "nvme0n1", 00:18:50.332 "thin_provision": true, 00:18:50.332 "num_allocated_clusters": 0, 00:18:50.332 "snapshot": false, 00:18:50.332 "clone": false, 00:18:50.332 "esnap_clone": false 00:18:50.332 } 00:18:50.332 } 00:18:50.332 } 00:18:50.332 ]' 00:18:50.332 20:35:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:50.332 20:35:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:50.332 20:35:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:50.332 20:35:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:50.332 20:35:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:50.332 20:35:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:50.332 20:35:44 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:50.332 20:35:44 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cada3b5d-57a1-440b-a7e5-f060f5bb9014 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:50.591 [2024-07-12 20:35:44.575792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.591 [2024-07-12 20:35:44.575860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:50.591 [2024-07-12 20:35:44.575886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:50.591 [2024-07-12 20:35:44.575902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.591 [2024-07-12 20:35:44.578895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.591 [2024-07-12 20:35:44.578952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:50.592 [2024-07-12 20:35:44.578970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.958 ms 00:18:50.592 [2024-07-12 20:35:44.578992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.592 [2024-07-12 20:35:44.579278] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
nvc0n1p0 as write buffer cache 00:18:50.592 [2024-07-12 20:35:44.579606] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:50.592 [2024-07-12 20:35:44.579639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.592 [2024-07-12 20:35:44.579659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:50.592 [2024-07-12 20:35:44.579674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:18:50.592 [2024-07-12 20:35:44.579688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.592 [2024-07-12 20:35:44.579845] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 292a0a4e-5585-47cf-8793-eea72c677626 00:18:50.592 [2024-07-12 20:35:44.581635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.592 [2024-07-12 20:35:44.581678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:50.592 [2024-07-12 20:35:44.581699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:50.592 [2024-07-12 20:35:44.581715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.592 [2024-07-12 20:35:44.591166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.592 [2024-07-12 20:35:44.591231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:50.592 [2024-07-12 20:35:44.591269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.334 ms 00:18:50.592 [2024-07-12 20:35:44.591283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.592 [2024-07-12 20:35:44.591551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.592 [2024-07-12 20:35:44.591577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:50.592 [2024-07-12 20:35:44.591601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:18:50.592 [2024-07-12 20:35:44.591614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.592 [2024-07-12 20:35:44.591671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.592 [2024-07-12 20:35:44.591687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:50.592 [2024-07-12 20:35:44.591702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:50.592 [2024-07-12 20:35:44.591714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.592 [2024-07-12 20:35:44.591762] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:50.592 [2024-07-12 20:35:44.593982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.592 [2024-07-12 20:35:44.594028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:50.592 [2024-07-12 20:35:44.594045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.234 ms 00:18:50.592 [2024-07-12 20:35:44.594079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.592 [2024-07-12 20:35:44.594151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.592 [2024-07-12 20:35:44.594171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:50.592 [2024-07-12 20:35:44.594184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 
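Pulled out of the interleaved xtrace output, the bdev stack under test is assembled with the following rpc.py sequence (a sketch using the PCI addresses, sizes and names from this run; the lvstore and lvol UUIDs are placeholders for whatever the create calls return at runtime):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: the NVMe controller at 0000:00:11.0, exposed as nvme0n1.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

    # Carve a 103424 MiB thin-provisioned lvol out of it to serve as the FTL base bdev.
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>

    # Cache device: the NVMe controller at 0000:00:10.0, split into one 5171 MiB partition.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1

    # Create the FTL bdev on top: the lvol as base device, nvc0n1p0 as the NV cache.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 \
            --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The bdev_get_bdevs + jq pairs in the trace are the get_bdev_size helper reading block_size and num_blocks to size these pieces; the FTL startup log that follows the bdev_ftl_create call is that device being created and initialized for the first time.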
00:18:50.592 [2024-07-12 20:35:44.594201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.592 [2024-07-12 20:35:44.594253] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:50.592 [2024-07-12 20:35:44.594444] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:50.592 [2024-07-12 20:35:44.594467] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:50.592 [2024-07-12 20:35:44.594509] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:50.592 [2024-07-12 20:35:44.594534] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:50.592 [2024-07-12 20:35:44.594552] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:50.592 [2024-07-12 20:35:44.594592] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:50.592 [2024-07-12 20:35:44.594608] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:50.592 [2024-07-12 20:35:44.594620] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:50.592 [2024-07-12 20:35:44.594637] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:50.592 [2024-07-12 20:35:44.594650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.592 [2024-07-12 20:35:44.594679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:50.592 [2024-07-12 20:35:44.594693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:18:50.592 [2024-07-12 20:35:44.594707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.592 [2024-07-12 20:35:44.594819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.592 [2024-07-12 20:35:44.594842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:50.592 [2024-07-12 20:35:44.594855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:50.592 [2024-07-12 20:35:44.594887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.592 [2024-07-12 20:35:44.595037] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:50.592 [2024-07-12 20:35:44.595063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:50.592 [2024-07-12 20:35:44.595091] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:50.592 [2024-07-12 20:35:44.595125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:50.592 [2024-07-12 20:35:44.595152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:50.592 [2024-07-12 20:35:44.595176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:50.592 [2024-07-12 20:35:44.595188] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595201] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:50.592 [2024-07-12 20:35:44.595212] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region band_md_mirror 00:18:50.592 [2024-07-12 20:35:44.595225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:50.592 [2024-07-12 20:35:44.595252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:50.592 [2024-07-12 20:35:44.595272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:50.592 [2024-07-12 20:35:44.595284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:50.592 [2024-07-12 20:35:44.595298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:50.592 [2024-07-12 20:35:44.595324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:50.592 [2024-07-12 20:35:44.595336] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:50.592 [2024-07-12 20:35:44.595362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595375] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:50.592 [2024-07-12 20:35:44.595386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:50.592 [2024-07-12 20:35:44.595399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:50.592 [2024-07-12 20:35:44.595424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:50.592 [2024-07-12 20:35:44.595435] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:50.592 [2024-07-12 20:35:44.595460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:50.592 [2024-07-12 20:35:44.595475] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:50.592 [2024-07-12 20:35:44.595500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:50.592 [2024-07-12 20:35:44.595511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595525] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:50.592 [2024-07-12 20:35:44.595536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:50.592 [2024-07-12 20:35:44.595549] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:50.592 [2024-07-12 20:35:44.595561] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:50.592 [2024-07-12 20:35:44.595574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:50.592 [2024-07-12 20:35:44.595585] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:50.592 [2024-07-12 20:35:44.595599] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:50.592 [2024-07-12 20:35:44.595623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:50.592 [2024-07-12 
20:35:44.595634] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595647] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:50.592 [2024-07-12 20:35:44.595660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:50.592 [2024-07-12 20:35:44.595678] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:50.592 [2024-07-12 20:35:44.595690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.592 [2024-07-12 20:35:44.595705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:50.592 [2024-07-12 20:35:44.595717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:50.592 [2024-07-12 20:35:44.595731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:50.592 [2024-07-12 20:35:44.595742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:50.592 [2024-07-12 20:35:44.595756] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:50.592 [2024-07-12 20:35:44.595767] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:50.592 [2024-07-12 20:35:44.595785] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:50.592 [2024-07-12 20:35:44.595800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:50.592 [2024-07-12 20:35:44.595816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:50.592 [2024-07-12 20:35:44.595828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:50.592 [2024-07-12 20:35:44.595842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:50.593 [2024-07-12 20:35:44.595855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:50.593 [2024-07-12 20:35:44.595869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:50.593 [2024-07-12 20:35:44.595881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:50.593 [2024-07-12 20:35:44.595898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:50.593 [2024-07-12 20:35:44.595910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:50.593 [2024-07-12 20:35:44.595923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:50.593 [2024-07-12 20:35:44.595935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:50.593 [2024-07-12 20:35:44.595949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:50.593 [2024-07-12 20:35:44.595961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:50.593 [2024-07-12 20:35:44.595975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:50.593 [2024-07-12 20:35:44.595987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:50.593 [2024-07-12 20:35:44.596003] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:50.593 [2024-07-12 20:35:44.596019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:50.593 [2024-07-12 20:35:44.596035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:50.593 [2024-07-12 20:35:44.596047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:50.593 [2024-07-12 20:35:44.596061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:50.593 [2024-07-12 20:35:44.596074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:50.593 [2024-07-12 20:35:44.596089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.593 [2024-07-12 20:35:44.596102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:50.593 [2024-07-12 20:35:44.596119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:18:50.593 [2024-07-12 20:35:44.596131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.593 [2024-07-12 20:35:44.596266] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
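
The superblock dump above repeats the NV cache layout in raw units: blk_offs/blk_sz are hexadecimal counts of FTL blocks, while the dump_region lines give the same regions in MiB. Assuming one FTL block equals the 4096-byte block_size that bdev_get_bdevs reports for ftl0 later in this run, the two views can be reconciled with a short sketch like the following (blk_sz values copied from the dump; the region names are paired by matching size, not taken from the FTL source, and the snippet is not part of the test output):

# Hypothetical cross-check of the region table above; assumes 1 FTL block = 4096 bytes.
BLOCK_SIZE = 4096

# blk_sz values copied from "SB metadata layout - nvc"; names guessed by size.
regions = [
    ("l2p (type 0x2)",     0x5a00),
    ("band_md (type 0x3)", 0x80),
    ("p2l0 (type 0xa)",    0x800),
    ("trim_md (type 0xe)", 0x40),
]

for name, blk_sz in regions:
    mib = blk_sz * BLOCK_SIZE / (1024 * 1024)
    print(f"{name:20s} {blk_sz:#8x} blocks = {mib:.2f} MiB")

# Prints 90.00, 0.50, 8.00 and 0.25 MiB, matching the l2p, band_md, p2l0 and
# trim_md sizes shown by the dump_region lines in this log.
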
00:18:50.593 [2024-07-12 20:35:44.596306] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:53.144 [2024-07-12 20:35:46.950829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.144 [2024-07-12 20:35:46.950905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:53.144 [2024-07-12 20:35:46.950947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2354.551 ms 00:18:53.144 [2024-07-12 20:35:46.950961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.144 [2024-07-12 20:35:46.965547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.144 [2024-07-12 20:35:46.965608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:53.144 [2024-07-12 20:35:46.965634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.455 ms 00:18:53.144 [2024-07-12 20:35:46.965647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.144 [2024-07-12 20:35:46.965881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.144 [2024-07-12 20:35:46.965900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:53.144 [2024-07-12 20:35:46.965921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:53.144 [2024-07-12 20:35:46.965933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.144 [2024-07-12 20:35:46.986512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.144 [2024-07-12 20:35:46.986571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:53.144 [2024-07-12 20:35:46.986596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.529 ms 00:18:53.144 [2024-07-12 20:35:46.986609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.144 [2024-07-12 20:35:46.986752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.144 [2024-07-12 20:35:46.986775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:53.144 [2024-07-12 20:35:46.986792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:53.144 [2024-07-12 20:35:46.986804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.144 [2024-07-12 20:35:46.987415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.144 [2024-07-12 20:35:46.987443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:53.144 [2024-07-12 20:35:46.987462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:18:53.144 [2024-07-12 20:35:46.987475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.144 [2024-07-12 20:35:46.987670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.144 [2024-07-12 20:35:46.987689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:53.144 [2024-07-12 20:35:46.987708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:18:53.144 [2024-07-12 20:35:46.987720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.144 [2024-07-12 20:35:46.996975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.144 [2024-07-12 20:35:46.997023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:53.144 [2024-07-12 
20:35:46.997052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.207 ms 00:18:53.144 [2024-07-12 20:35:46.997065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.144 [2024-07-12 20:35:47.007221] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:53.144 [2024-07-12 20:35:47.028442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.144 [2024-07-12 20:35:47.028523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:53.144 [2024-07-12 20:35:47.028545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.224 ms 00:18:53.144 [2024-07-12 20:35:47.028563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.144 [2024-07-12 20:35:47.089992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.144 [2024-07-12 20:35:47.090092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:53.144 [2024-07-12 20:35:47.090128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.286 ms 00:18:53.144 [2024-07-12 20:35:47.090153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.145 [2024-07-12 20:35:47.090491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.145 [2024-07-12 20:35:47.090530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:53.145 [2024-07-12 20:35:47.090548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:18:53.145 [2024-07-12 20:35:47.090581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.145 [2024-07-12 20:35:47.094353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.145 [2024-07-12 20:35:47.094401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:53.145 [2024-07-12 20:35:47.094419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.729 ms 00:18:53.145 [2024-07-12 20:35:47.094434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.145 [2024-07-12 20:35:47.097500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.145 [2024-07-12 20:35:47.097547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:53.145 [2024-07-12 20:35:47.097565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.987 ms 00:18:53.145 [2024-07-12 20:35:47.097580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.145 [2024-07-12 20:35:47.098037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.145 [2024-07-12 20:35:47.098080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:53.145 [2024-07-12 20:35:47.098096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:18:53.145 [2024-07-12 20:35:47.098114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.145 [2024-07-12 20:35:47.132551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.145 [2024-07-12 20:35:47.132630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:53.145 [2024-07-12 20:35:47.132654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.392 ms 00:18:53.145 [2024-07-12 20:35:47.132671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.145 [2024-07-12 20:35:47.137726] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.145 [2024-07-12 20:35:47.137778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:53.145 [2024-07-12 20:35:47.137802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.988 ms 00:18:53.145 [2024-07-12 20:35:47.137817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.145 [2024-07-12 20:35:47.141459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.145 [2024-07-12 20:35:47.141505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:53.145 [2024-07-12 20:35:47.141523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:18:53.145 [2024-07-12 20:35:47.141537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.145 [2024-07-12 20:35:47.145395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.145 [2024-07-12 20:35:47.145442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:53.145 [2024-07-12 20:35:47.145461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.801 ms 00:18:53.145 [2024-07-12 20:35:47.145479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.145 [2024-07-12 20:35:47.145541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.145 [2024-07-12 20:35:47.145563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:53.145 [2024-07-12 20:35:47.145578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:53.145 [2024-07-12 20:35:47.145592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.145 [2024-07-12 20:35:47.145687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.145 [2024-07-12 20:35:47.145707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:53.145 [2024-07-12 20:35:47.145720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:53.145 [2024-07-12 20:35:47.145734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.145 [2024-07-12 20:35:47.147118] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:53.145 [2024-07-12 20:35:47.148466] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2570.974 ms, result 0 00:18:53.145 [2024-07-12 20:35:47.149410] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:53.145 { 00:18:53.145 "name": "ftl0", 00:18:53.145 "uuid": "292a0a4e-5585-47cf-8793-eea72c677626" 00:18:53.145 } 00:18:53.145 20:35:47 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:53.145 20:35:47 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:18:53.145 20:35:47 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:53.145 20:35:47 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:18:53.145 20:35:47 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:53.145 20:35:47 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:53.145 20:35:47 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:53.404 20:35:47 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:53.663 [ 00:18:53.663 { 00:18:53.663 "name": "ftl0", 00:18:53.663 "aliases": [ 00:18:53.663 "292a0a4e-5585-47cf-8793-eea72c677626" 00:18:53.663 ], 00:18:53.663 "product_name": "FTL disk", 00:18:53.663 "block_size": 4096, 00:18:53.663 "num_blocks": 23592960, 00:18:53.663 "uuid": "292a0a4e-5585-47cf-8793-eea72c677626", 00:18:53.663 "assigned_rate_limits": { 00:18:53.663 "rw_ios_per_sec": 0, 00:18:53.663 "rw_mbytes_per_sec": 0, 00:18:53.663 "r_mbytes_per_sec": 0, 00:18:53.663 "w_mbytes_per_sec": 0 00:18:53.663 }, 00:18:53.663 "claimed": false, 00:18:53.663 "zoned": false, 00:18:53.663 "supported_io_types": { 00:18:53.663 "read": true, 00:18:53.663 "write": true, 00:18:53.663 "unmap": true, 00:18:53.663 "flush": true, 00:18:53.663 "reset": false, 00:18:53.663 "nvme_admin": false, 00:18:53.663 "nvme_io": false, 00:18:53.663 "nvme_io_md": false, 00:18:53.663 "write_zeroes": true, 00:18:53.663 "zcopy": false, 00:18:53.663 "get_zone_info": false, 00:18:53.663 "zone_management": false, 00:18:53.663 "zone_append": false, 00:18:53.663 "compare": false, 00:18:53.663 "compare_and_write": false, 00:18:53.663 "abort": false, 00:18:53.663 "seek_hole": false, 00:18:53.663 "seek_data": false, 00:18:53.663 "copy": false, 00:18:53.663 "nvme_iov_md": false 00:18:53.663 }, 00:18:53.663 "driver_specific": { 00:18:53.663 "ftl": { 00:18:53.663 "base_bdev": "cada3b5d-57a1-440b-a7e5-f060f5bb9014", 00:18:53.663 "cache": "nvc0n1p0" 00:18:53.663 } 00:18:53.663 } 00:18:53.663 } 00:18:53.663 ] 00:18:53.663 20:35:47 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:18:53.663 20:35:47 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:53.663 20:35:47 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:53.922 20:35:48 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:53.922 20:35:48 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:54.180 20:35:48 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:54.180 { 00:18:54.180 "name": "ftl0", 00:18:54.180 "aliases": [ 00:18:54.180 "292a0a4e-5585-47cf-8793-eea72c677626" 00:18:54.180 ], 00:18:54.180 "product_name": "FTL disk", 00:18:54.180 "block_size": 4096, 00:18:54.180 "num_blocks": 23592960, 00:18:54.180 "uuid": "292a0a4e-5585-47cf-8793-eea72c677626", 00:18:54.180 "assigned_rate_limits": { 00:18:54.180 "rw_ios_per_sec": 0, 00:18:54.180 "rw_mbytes_per_sec": 0, 00:18:54.180 "r_mbytes_per_sec": 0, 00:18:54.180 "w_mbytes_per_sec": 0 00:18:54.180 }, 00:18:54.180 "claimed": false, 00:18:54.180 "zoned": false, 00:18:54.180 "supported_io_types": { 00:18:54.180 "read": true, 00:18:54.180 "write": true, 00:18:54.180 "unmap": true, 00:18:54.180 "flush": true, 00:18:54.180 "reset": false, 00:18:54.180 "nvme_admin": false, 00:18:54.180 "nvme_io": false, 00:18:54.180 "nvme_io_md": false, 00:18:54.180 "write_zeroes": true, 00:18:54.180 "zcopy": false, 00:18:54.180 "get_zone_info": false, 00:18:54.180 "zone_management": false, 00:18:54.180 "zone_append": false, 00:18:54.180 "compare": false, 00:18:54.180 "compare_and_write": false, 00:18:54.180 "abort": false, 00:18:54.180 "seek_hole": false, 00:18:54.180 "seek_data": false, 00:18:54.180 "copy": false, 00:18:54.180 "nvme_iov_md": false 00:18:54.180 }, 00:18:54.180 "driver_specific": { 00:18:54.180 "ftl": { 00:18:54.180 "base_bdev": "cada3b5d-57a1-440b-a7e5-f060f5bb9014", 00:18:54.180 "cache": "nvc0n1p0" 
00:18:54.180 } 00:18:54.180 } 00:18:54.180 } 00:18:54.180 ]' 00:18:54.180 20:35:48 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:54.438 20:35:48 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:54.438 20:35:48 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:54.698 [2024-07-12 20:35:48.597125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.698 [2024-07-12 20:35:48.597408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:54.698 [2024-07-12 20:35:48.597546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:54.698 [2024-07-12 20:35:48.597571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.698 [2024-07-12 20:35:48.597638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:54.698 [2024-07-12 20:35:48.598481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.698 [2024-07-12 20:35:48.598516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:54.698 [2024-07-12 20:35:48.598531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:18:54.698 [2024-07-12 20:35:48.598551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.698 [2024-07-12 20:35:48.599121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.698 [2024-07-12 20:35:48.599159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:54.698 [2024-07-12 20:35:48.599175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:18:54.698 [2024-07-12 20:35:48.599190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.698 [2024-07-12 20:35:48.602792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.698 [2024-07-12 20:35:48.602825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:54.698 [2024-07-12 20:35:48.602840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.565 ms 00:18:54.698 [2024-07-12 20:35:48.602854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.698 [2024-07-12 20:35:48.610172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.698 [2024-07-12 20:35:48.610235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:54.698 [2024-07-12 20:35:48.610267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.224 ms 00:18:54.698 [2024-07-12 20:35:48.610286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.698 [2024-07-12 20:35:48.611874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.698 [2024-07-12 20:35:48.611923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:54.698 [2024-07-12 20:35:48.611940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms 00:18:54.698 [2024-07-12 20:35:48.611954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.699 [2024-07-12 20:35:48.616343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.699 [2024-07-12 20:35:48.616392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:54.699 [2024-07-12 20:35:48.616410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.337 ms 00:18:54.699 
[2024-07-12 20:35:48.616429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.699 [2024-07-12 20:35:48.616635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.699 [2024-07-12 20:35:48.616662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:54.699 [2024-07-12 20:35:48.616678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:18:54.699 [2024-07-12 20:35:48.616694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.699 [2024-07-12 20:35:48.618317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.699 [2024-07-12 20:35:48.618358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:54.699 [2024-07-12 20:35:48.618375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:18:54.699 [2024-07-12 20:35:48.618392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.699 [2024-07-12 20:35:48.619868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.699 [2024-07-12 20:35:48.619910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:54.699 [2024-07-12 20:35:48.619926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:18:54.699 [2024-07-12 20:35:48.619940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.699 [2024-07-12 20:35:48.620961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.699 [2024-07-12 20:35:48.621001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:54.699 [2024-07-12 20:35:48.621017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:18:54.699 [2024-07-12 20:35:48.621030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.699 [2024-07-12 20:35:48.622061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.699 [2024-07-12 20:35:48.622103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:54.699 [2024-07-12 20:35:48.622119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:18:54.699 [2024-07-12 20:35:48.622133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.699 [2024-07-12 20:35:48.622184] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:54.699 [2024-07-12 20:35:48.622211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622680] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.622983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 
20:35:48.623097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:54.699 [2024-07-12 20:35:48.623359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:18:54.700 [2024-07-12 20:35:48.623473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:54.700 [2024-07-12 20:35:48.623742] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:54.700 [2024-07-12 20:35:48.623754] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 292a0a4e-5585-47cf-8793-eea72c677626 00:18:54.700 [2024-07-12 20:35:48.623769] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:54.700 [2024-07-12 20:35:48.623784] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:54.700 [2024-07-12 20:35:48.623797] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:54.700 [2024-07-12 20:35:48.623809] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:54.700 [2024-07-12 20:35:48.623822] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:54.700 [2024-07-12 20:35:48.623834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:54.700 [2024-07-12 20:35:48.623848] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:54.700 [2024-07-12 20:35:48.623858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:54.700 [2024-07-12 20:35:48.623872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:54.700 [2024-07-12 20:35:48.623884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.700 [2024-07-12 20:35:48.623898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:54.700 [2024-07-12 20:35:48.623912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.702 ms 00:18:54.700 [2024-07-12 20:35:48.623928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.626219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.700 [2024-07-12 20:35:48.626267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:54.700 [2024-07-12 20:35:48.626285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:18:54.700 [2024-07-12 20:35:48.626300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.626443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.700 [2024-07-12 20:35:48.626461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:54.700 [2024-07-12 20:35:48.626475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:54.700 [2024-07-12 20:35:48.626489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.634731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.700 [2024-07-12 20:35:48.634788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:54.700 [2024-07-12 20:35:48.634805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.700 [2024-07-12 20:35:48.634839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.634988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.700 [2024-07-12 20:35:48.635011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:54.700 [2024-07-12 20:35:48.635026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.700 [2024-07-12 20:35:48.635043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.635151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.700 [2024-07-12 20:35:48.635179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:54.700 [2024-07-12 20:35:48.635207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.700 [2024-07-12 20:35:48.635236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.635301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.700 [2024-07-12 20:35:48.635320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:54.700 [2024-07-12 20:35:48.635332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.700 [2024-07-12 20:35:48.635346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.649123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:18:54.700 [2024-07-12 20:35:48.649200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:54.700 [2024-07-12 20:35:48.649221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.700 [2024-07-12 20:35:48.649236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.659349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.700 [2024-07-12 20:35:48.659413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:54.700 [2024-07-12 20:35:48.659432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.700 [2024-07-12 20:35:48.659452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.659566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.700 [2024-07-12 20:35:48.659593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:54.700 [2024-07-12 20:35:48.659607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.700 [2024-07-12 20:35:48.659621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.659687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.700 [2024-07-12 20:35:48.659705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:54.700 [2024-07-12 20:35:48.659717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.700 [2024-07-12 20:35:48.659732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.659852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.700 [2024-07-12 20:35:48.659877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:54.700 [2024-07-12 20:35:48.659893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.700 [2024-07-12 20:35:48.659908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.659978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.700 [2024-07-12 20:35:48.660002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:54.700 [2024-07-12 20:35:48.660015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.700 [2024-07-12 20:35:48.660032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.660095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.700 [2024-07-12 20:35:48.660113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:54.700 [2024-07-12 20:35:48.660150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.700 [2024-07-12 20:35:48.660165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 20:35:48.660287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.700 [2024-07-12 20:35:48.660327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:54.700 [2024-07-12 20:35:48.660342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.700 [2024-07-12 20:35:48.660356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.700 [2024-07-12 
20:35:48.660586] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.447 ms, result 0 00:18:54.700 true 00:18:54.700 20:35:48 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 92307 00:18:54.700 20:35:48 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 92307 ']' 00:18:54.700 20:35:48 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 92307 00:18:54.700 20:35:48 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:18:54.700 20:35:48 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:54.700 20:35:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92307 00:18:54.700 20:35:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:54.700 20:35:48 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:54.700 20:35:48 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92307' 00:18:54.700 killing process with pid 92307 00:18:54.700 20:35:48 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 92307 00:18:54.700 20:35:48 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 92307 00:18:57.983 20:35:51 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:58.918 65536+0 records in 00:18:58.918 65536+0 records out 00:18:58.918 268435456 bytes (268 MB, 256 MiB) copied, 1.20338 s, 223 MB/s 00:18:58.918 20:35:53 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:59.175 [2024-07-12 20:35:53.100021] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:18:59.175 [2024-07-12 20:35:53.100225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92491 ] 00:18:59.175 [2024-07-12 20:35:53.252918] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
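
A few lines up, trim.sh runs plain dd (65536 records of 4 KiB) to produce the random pattern that spdk_dd then writes to ftl0, and dd reports 268435456 bytes (256 MiB) copied at roughly 223 MB/s. A small sanity check of those printed figures, using nothing beyond the numbers in the dd output (not part of the test itself):

# Recompute the dd summary line from its inputs.
bs, count = 4096, 65536          # dd if=/dev/urandom bs=4K count=65536
elapsed = 1.20338                # seconds, as reported by dd

total = bs * count
print(total)                               # 268435456 bytes
print(total / (1024 * 1024))               # 256.0 MiB
print(round(total / elapsed / 1_000_000))  # ~223 (dd's MB/s is decimal MB)
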
00:18:59.175 [2024-07-12 20:35:53.271986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.433 [2024-07-12 20:35:53.369034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.433 [2024-07-12 20:35:53.498336] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:59.433 [2024-07-12 20:35:53.498441] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:59.693 [2024-07-12 20:35:53.660231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.693 [2024-07-12 20:35:53.660339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:59.693 [2024-07-12 20:35:53.660361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:59.693 [2024-07-12 20:35:53.660385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.693 [2024-07-12 20:35:53.663301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.693 [2024-07-12 20:35:53.663345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.693 [2024-07-12 20:35:53.663374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.886 ms 00:18:59.693 [2024-07-12 20:35:53.663386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.693 [2024-07-12 20:35:53.663488] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:59.693 [2024-07-12 20:35:53.663773] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:59.693 [2024-07-12 20:35:53.663798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.693 [2024-07-12 20:35:53.663820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.693 [2024-07-12 20:35:53.663833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:18:59.693 [2024-07-12 20:35:53.663844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.693 [2024-07-12 20:35:53.666020] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:59.693 [2024-07-12 20:35:53.669099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.693 [2024-07-12 20:35:53.669302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:59.693 [2024-07-12 20:35:53.669425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.082 ms 00:18:59.693 [2024-07-12 20:35:53.669476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.693 [2024-07-12 20:35:53.669702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.693 [2024-07-12 20:35:53.669861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:59.693 [2024-07-12 20:35:53.670004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:59.693 [2024-07-12 20:35:53.670147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.693 [2024-07-12 20:35:53.678650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.694 [2024-07-12 20:35:53.678819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.694 [2024-07-12 20:35:53.678846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.380 ms 00:18:59.694 [2024-07-12 20:35:53.678861] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.694 [2024-07-12 20:35:53.679051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.694 [2024-07-12 20:35:53.679091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.694 [2024-07-12 20:35:53.679113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:59.694 [2024-07-12 20:35:53.679125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.694 [2024-07-12 20:35:53.679170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.694 [2024-07-12 20:35:53.679186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:59.694 [2024-07-12 20:35:53.679198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:59.694 [2024-07-12 20:35:53.679210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.694 [2024-07-12 20:35:53.679262] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:59.694 [2024-07-12 20:35:53.681317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.694 [2024-07-12 20:35:53.681353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.694 [2024-07-12 20:35:53.681380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:18:59.694 [2024-07-12 20:35:53.681392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.694 [2024-07-12 20:35:53.681453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.694 [2024-07-12 20:35:53.681470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:59.694 [2024-07-12 20:35:53.681486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:59.694 [2024-07-12 20:35:53.681498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.694 [2024-07-12 20:35:53.681535] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:59.694 [2024-07-12 20:35:53.681565] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:59.694 [2024-07-12 20:35:53.681638] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:59.694 [2024-07-12 20:35:53.681668] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:59.694 [2024-07-12 20:35:53.681782] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:59.694 [2024-07-12 20:35:53.681807] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:59.694 [2024-07-12 20:35:53.681822] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:59.694 [2024-07-12 20:35:53.681837] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:59.694 [2024-07-12 20:35:53.681850] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:59.694 [2024-07-12 20:35:53.681863] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:59.694 [2024-07-12 20:35:53.681880] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:18:59.694 [2024-07-12 20:35:53.681900] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:59.694 [2024-07-12 20:35:53.681911] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:59.694 [2024-07-12 20:35:53.681923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.694 [2024-07-12 20:35:53.681949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:59.694 [2024-07-12 20:35:53.681962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:18:59.694 [2024-07-12 20:35:53.681981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.694 [2024-07-12 20:35:53.682080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.694 [2024-07-12 20:35:53.682096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:59.694 [2024-07-12 20:35:53.682108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:59.694 [2024-07-12 20:35:53.682125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.694 [2024-07-12 20:35:53.682234] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:59.694 [2024-07-12 20:35:53.682284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:59.694 [2024-07-12 20:35:53.682298] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.694 [2024-07-12 20:35:53.682310] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:59.694 [2024-07-12 20:35:53.682332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682343] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:59.694 [2024-07-12 20:35:53.682355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:59.694 [2024-07-12 20:35:53.682367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682382] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.694 [2024-07-12 20:35:53.682393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:59.694 [2024-07-12 20:35:53.682404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:59.694 [2024-07-12 20:35:53.682414] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.694 [2024-07-12 20:35:53.682435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:59.694 [2024-07-12 20:35:53.682447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:59.694 [2024-07-12 20:35:53.682457] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:59.694 [2024-07-12 20:35:53.682479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:59.694 [2024-07-12 20:35:53.682489] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:59.694 [2024-07-12 20:35:53.682509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682520] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.694 [2024-07-12 20:35:53.682538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:59.694 [2024-07-12 20:35:53.682548] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.694 [2024-07-12 20:35:53.682576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:59.694 [2024-07-12 20:35:53.682587] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.694 [2024-07-12 20:35:53.682608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:59.694 [2024-07-12 20:35:53.682620] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.694 [2024-07-12 20:35:53.682641] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:59.694 [2024-07-12 20:35:53.682651] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682662] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.694 [2024-07-12 20:35:53.682672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:59.694 [2024-07-12 20:35:53.682683] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:59.694 [2024-07-12 20:35:53.682693] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.694 [2024-07-12 20:35:53.682704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:59.694 [2024-07-12 20:35:53.682714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:59.694 [2024-07-12 20:35:53.682725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:59.694 [2024-07-12 20:35:53.682749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:59.694 [2024-07-12 20:35:53.682760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682770] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:59.694 [2024-07-12 20:35:53.682790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:59.694 [2024-07-12 20:35:53.682802] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.694 [2024-07-12 20:35:53.682813] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.694 [2024-07-12 20:35:53.682824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:59.694 [2024-07-12 20:35:53.682835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:59.694 [2024-07-12 20:35:53.682846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:59.694 [2024-07-12 20:35:53.682857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:59.694 [2024-07-12 20:35:53.682867] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:59.694 [2024-07-12 20:35:53.682878] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:18:59.694 [2024-07-12 20:35:53.682890] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:59.694 [2024-07-12 20:35:53.682909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.694 [2024-07-12 20:35:53.682933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:59.694 [2024-07-12 20:35:53.682947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:59.694 [2024-07-12 20:35:53.682962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:59.694 [2024-07-12 20:35:53.682974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:59.694 [2024-07-12 20:35:53.682985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:59.694 [2024-07-12 20:35:53.682997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:59.694 [2024-07-12 20:35:53.683016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:59.694 [2024-07-12 20:35:53.683028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:59.694 [2024-07-12 20:35:53.683040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:59.694 [2024-07-12 20:35:53.683051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:59.694 [2024-07-12 20:35:53.683063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:59.695 [2024-07-12 20:35:53.683085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:59.695 [2024-07-12 20:35:53.683096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:59.695 [2024-07-12 20:35:53.683108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:59.695 [2024-07-12 20:35:53.683119] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:59.695 [2024-07-12 20:35:53.683131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.695 [2024-07-12 20:35:53.683143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:59.695 [2024-07-12 20:35:53.683155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:59.695 [2024-07-12 20:35:53.683169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:59.695 [2024-07-12 20:35:53.683182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:59.695 [2024-07-12 20:35:53.683194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.683206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:59.695 [2024-07-12 20:35:53.683218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:18:59.695 [2024-07-12 20:35:53.683229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.709801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.709880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.695 [2024-07-12 20:35:53.709915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.456 ms 00:18:59.695 [2024-07-12 20:35:53.709933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.710193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.710221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:59.695 [2024-07-12 20:35:53.710266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:59.695 [2024-07-12 20:35:53.710288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.723609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.723665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.695 [2024-07-12 20:35:53.723692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.273 ms 00:18:59.695 [2024-07-12 20:35:53.723704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.723821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.723841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.695 [2024-07-12 20:35:53.723860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:59.695 [2024-07-12 20:35:53.723873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.724468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.724507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.695 [2024-07-12 20:35:53.724521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:18:59.695 [2024-07-12 20:35:53.724539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.724710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.724735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.695 [2024-07-12 20:35:53.724748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:18:59.695 [2024-07-12 20:35:53.724759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.732808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 
20:35:53.732865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.695 [2024-07-12 20:35:53.732883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.017 ms 00:18:59.695 [2024-07-12 20:35:53.732901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.735984] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:59.695 [2024-07-12 20:35:53.736029] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:59.695 [2024-07-12 20:35:53.736048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.736060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:59.695 [2024-07-12 20:35:53.736078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:18:59.695 [2024-07-12 20:35:53.736089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.751889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.751939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:59.695 [2024-07-12 20:35:53.751957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.725 ms 00:18:59.695 [2024-07-12 20:35:53.751969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.754185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.754228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:59.695 [2024-07-12 20:35:53.754257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:18:59.695 [2024-07-12 20:35:53.754271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.755852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.755891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:59.695 [2024-07-12 20:35:53.755907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.528 ms 00:18:59.695 [2024-07-12 20:35:53.755928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.756406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.756435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:59.695 [2024-07-12 20:35:53.756450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:18:59.695 [2024-07-12 20:35:53.756461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.777773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.777849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:59.695 [2024-07-12 20:35:53.777871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.250 ms 00:18:59.695 [2024-07-12 20:35:53.777884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.786438] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:59.695 [2024-07-12 20:35:53.808127] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.808200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:59.695 [2024-07-12 20:35:53.808221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.079 ms 00:18:59.695 [2024-07-12 20:35:53.808233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.808415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.808440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:59.695 [2024-07-12 20:35:53.808455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:59.695 [2024-07-12 20:35:53.808466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.808542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.808559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:59.695 [2024-07-12 20:35:53.808572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:59.695 [2024-07-12 20:35:53.808583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.808629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.808644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:59.695 [2024-07-12 20:35:53.808663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:59.695 [2024-07-12 20:35:53.808675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.808715] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:59.695 [2024-07-12 20:35:53.808733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.808746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:59.695 [2024-07-12 20:35:53.808760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:59.695 [2024-07-12 20:35:53.808782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.812956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.813000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:59.695 [2024-07-12 20:35:53.813018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.142 ms 00:18:59.695 [2024-07-12 20:35:53.813038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.813162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.695 [2024-07-12 20:35:53.813183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:59.695 [2024-07-12 20:35:53.813196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:59.695 [2024-07-12 20:35:53.813207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.695 [2024-07-12 20:35:53.814516] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:59.695 [2024-07-12 20:35:53.815721] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.958 
ms, result 0 00:18:59.695 [2024-07-12 20:35:53.816544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:59.695 [2024-07-12 20:35:53.825848] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:10.498  Copying: 22/256 [MB] (22 MBps) Copying: 47/256 [MB] (24 MBps) Copying: 71/256 [MB] (24 MBps) Copying: 96/256 [MB] (24 MBps) Copying: 120/256 [MB] (24 MBps) Copying: 144/256 [MB] (24 MBps) Copying: 170/256 [MB] (25 MBps) Copying: 194/256 [MB] (24 MBps) Copying: 218/256 [MB] (23 MBps) Copying: 243/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 24 MBps)[2024-07-12 20:36:04.341697] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:10.498 [2024-07-12 20:36:04.343402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.498 [2024-07-12 20:36:04.343450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:10.498 [2024-07-12 20:36:04.343472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:10.498 [2024-07-12 20:36:04.343484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.498 [2024-07-12 20:36:04.343515] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:10.498 [2024-07-12 20:36:04.344337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.498 [2024-07-12 20:36:04.344361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:10.498 [2024-07-12 20:36:04.344376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:19:10.498 [2024-07-12 20:36:04.344387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.498 [2024-07-12 20:36:04.346205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.498 [2024-07-12 20:36:04.346264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:10.498 [2024-07-12 20:36:04.346282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:19:10.498 [2024-07-12 20:36:04.346294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.499 [2024-07-12 20:36:04.353010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.499 [2024-07-12 20:36:04.353052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:10.499 [2024-07-12 20:36:04.353069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.677 ms 00:19:10.499 [2024-07-12 20:36:04.353081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.499 [2024-07-12 20:36:04.360508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.499 [2024-07-12 20:36:04.360577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:10.499 [2024-07-12 20:36:04.360619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.382 ms 00:19:10.499 [2024-07-12 20:36:04.360631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.499 [2024-07-12 20:36:04.362063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.499 [2024-07-12 20:36:04.362104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:10.499 [2024-07-12 20:36:04.362120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.357 ms 00:19:10.499 [2024-07-12 20:36:04.362131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.499 [2024-07-12 20:36:04.365453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.499 [2024-07-12 20:36:04.365494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:10.499 [2024-07-12 20:36:04.365510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.280 ms 00:19:10.499 [2024-07-12 20:36:04.365522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.499 [2024-07-12 20:36:04.365662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.499 [2024-07-12 20:36:04.365682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:10.499 [2024-07-12 20:36:04.365710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:10.499 [2024-07-12 20:36:04.365723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.499 [2024-07-12 20:36:04.367772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.499 [2024-07-12 20:36:04.367811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:10.499 [2024-07-12 20:36:04.367841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.025 ms 00:19:10.499 [2024-07-12 20:36:04.367853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.499 [2024-07-12 20:36:04.369336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.499 [2024-07-12 20:36:04.369372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:10.499 [2024-07-12 20:36:04.369387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:19:10.499 [2024-07-12 20:36:04.369398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.499 [2024-07-12 20:36:04.370507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.499 [2024-07-12 20:36:04.370547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:10.499 [2024-07-12 20:36:04.370562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:19:10.499 [2024-07-12 20:36:04.370573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.499 [2024-07-12 20:36:04.371678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.499 [2024-07-12 20:36:04.371719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:10.499 [2024-07-12 20:36:04.371734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:19:10.499 [2024-07-12 20:36:04.371744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.499 [2024-07-12 20:36:04.371785] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:10.499 [2024-07-12 20:36:04.371809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 
20:36:04.371859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.371991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:10.499 [2024-07-12 20:36:04.372161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:10.499 [2024-07-12 20:36:04.372404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.372989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.373001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.373013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.373024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:10.500 [2024-07-12 20:36:04.373045] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:10.500 [2024-07-12 20:36:04.373057] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 292a0a4e-5585-47cf-8793-eea72c677626 00:19:10.500 [2024-07-12 20:36:04.373069] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:10.500 [2024-07-12 20:36:04.373080] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:10.500 
[2024-07-12 20:36:04.373104] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:10.500 [2024-07-12 20:36:04.373116] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:10.500 [2024-07-12 20:36:04.373132] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:10.500 [2024-07-12 20:36:04.373159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:10.500 [2024-07-12 20:36:04.373176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:10.500 [2024-07-12 20:36:04.373186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:10.500 [2024-07-12 20:36:04.373196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:10.500 [2024-07-12 20:36:04.373207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.500 [2024-07-12 20:36:04.373219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:10.500 [2024-07-12 20:36:04.373253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:19:10.500 [2024-07-12 20:36:04.373267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.500 [2024-07-12 20:36:04.375415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.500 [2024-07-12 20:36:04.375454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:10.500 [2024-07-12 20:36:04.375477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.122 ms 00:19:10.500 [2024-07-12 20:36:04.375489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.500 [2024-07-12 20:36:04.375619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.500 [2024-07-12 20:36:04.375634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:10.500 [2024-07-12 20:36:04.375646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:10.500 [2024-07-12 20:36:04.375657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.500 [2024-07-12 20:36:04.383027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.500 [2024-07-12 20:36:04.383076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:10.500 [2024-07-12 20:36:04.383100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.500 [2024-07-12 20:36:04.383113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.501 [2024-07-12 20:36:04.383212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.501 [2024-07-12 20:36:04.383229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:10.501 [2024-07-12 20:36:04.383283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.501 [2024-07-12 20:36:04.383296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.501 [2024-07-12 20:36:04.383355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.501 [2024-07-12 20:36:04.383374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:10.501 [2024-07-12 20:36:04.383387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.501 [2024-07-12 20:36:04.383405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.501 [2024-07-12 20:36:04.383432] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:10.501 [2024-07-12 20:36:04.383445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:10.501 [2024-07-12 20:36:04.383457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.501 [2024-07-12 20:36:04.383468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.501 [2024-07-12 20:36:04.396125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.501 [2024-07-12 20:36:04.396192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:10.501 [2024-07-12 20:36:04.396222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.501 [2024-07-12 20:36:04.396235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.501 [2024-07-12 20:36:04.406639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.501 [2024-07-12 20:36:04.406699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:10.501 [2024-07-12 20:36:04.406733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.501 [2024-07-12 20:36:04.406746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.501 [2024-07-12 20:36:04.406826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.501 [2024-07-12 20:36:04.406844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:10.501 [2024-07-12 20:36:04.406868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.501 [2024-07-12 20:36:04.406889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.501 [2024-07-12 20:36:04.406943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.501 [2024-07-12 20:36:04.406960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:10.501 [2024-07-12 20:36:04.406972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.501 [2024-07-12 20:36:04.406984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.501 [2024-07-12 20:36:04.407079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.501 [2024-07-12 20:36:04.407105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:10.501 [2024-07-12 20:36:04.407117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.501 [2024-07-12 20:36:04.407129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.501 [2024-07-12 20:36:04.407183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.501 [2024-07-12 20:36:04.407201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:10.501 [2024-07-12 20:36:04.407213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.501 [2024-07-12 20:36:04.407224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.501 [2024-07-12 20:36:04.407289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.501 [2024-07-12 20:36:04.407308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:10.501 [2024-07-12 20:36:04.407320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.501 [2024-07-12 20:36:04.407331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:10.501 [2024-07-12 20:36:04.407394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.501 [2024-07-12 20:36:04.407411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:10.501 [2024-07-12 20:36:04.407423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.501 [2024-07-12 20:36:04.407434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.501 [2024-07-12 20:36:04.407605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.170 ms, result 0 00:19:10.760 00:19:10.760 00:19:10.760 20:36:04 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=92617 00:19:10.760 20:36:04 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:10.760 20:36:04 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 92617 00:19:10.760 20:36:04 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 92617 ']' 00:19:10.760 20:36:04 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.760 20:36:04 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.760 20:36:04 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.760 20:36:04 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.760 20:36:04 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:11.019 [2024-07-12 20:36:04.948886] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:19:11.019 [2024-07-12 20:36:04.949048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92617 ] 00:19:11.019 [2024-07-12 20:36:05.092182] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:11.019 [2024-07-12 20:36:05.115506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.278 [2024-07-12 20:36:05.203763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.846 20:36:05 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.846 20:36:05 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:19:11.846 20:36:05 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:12.105 [2024-07-12 20:36:06.092674] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:12.105 [2024-07-12 20:36:06.092788] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:12.365 [2024-07-12 20:36:06.269954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.365 [2024-07-12 20:36:06.270026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:12.365 [2024-07-12 20:36:06.270048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:12.365 [2024-07-12 20:36:06.270068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.365 [2024-07-12 20:36:06.273010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.365 [2024-07-12 20:36:06.273065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.365 [2024-07-12 20:36:06.273083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.912 ms 00:19:12.365 [2024-07-12 20:36:06.273103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.365 [2024-07-12 20:36:06.273375] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:12.365 [2024-07-12 20:36:06.273745] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:12.365 [2024-07-12 20:36:06.273788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.365 [2024-07-12 20:36:06.273824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.365 [2024-07-12 20:36:06.273840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:19:12.365 [2024-07-12 20:36:06.273859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.365 [2024-07-12 20:36:06.275920] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:12.365 [2024-07-12 20:36:06.278788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.365 [2024-07-12 20:36:06.278831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:12.365 [2024-07-12 20:36:06.278851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.864 ms 00:19:12.365 [2024-07-12 20:36:06.278865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.365 [2024-07-12 20:36:06.278955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.365 [2024-07-12 20:36:06.278975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:12.365 [2024-07-12 20:36:06.279000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:12.365 [2024-07-12 20:36:06.279014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.365 [2024-07-12 20:36:06.287557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.365 [2024-07-12 
20:36:06.287620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.365 [2024-07-12 20:36:06.287639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.470 ms 00:19:12.365 [2024-07-12 20:36:06.287651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.365 [2024-07-12 20:36:06.287859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.365 [2024-07-12 20:36:06.287890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.365 [2024-07-12 20:36:06.287913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:19:12.365 [2024-07-12 20:36:06.287925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.365 [2024-07-12 20:36:06.287969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.365 [2024-07-12 20:36:06.287984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:12.365 [2024-07-12 20:36:06.287998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:12.365 [2024-07-12 20:36:06.288009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.365 [2024-07-12 20:36:06.288046] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:12.365 [2024-07-12 20:36:06.290122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.365 [2024-07-12 20:36:06.290177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.365 [2024-07-12 20:36:06.290193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:19:12.365 [2024-07-12 20:36:06.290207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.365 [2024-07-12 20:36:06.290270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.365 [2024-07-12 20:36:06.290299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:12.365 [2024-07-12 20:36:06.290313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:12.365 [2024-07-12 20:36:06.290327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.365 [2024-07-12 20:36:06.290357] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:12.365 [2024-07-12 20:36:06.290397] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:12.365 [2024-07-12 20:36:06.290451] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:12.365 [2024-07-12 20:36:06.290482] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:12.365 [2024-07-12 20:36:06.290585] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:12.365 [2024-07-12 20:36:06.290604] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:12.365 [2024-07-12 20:36:06.290619] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:12.365 [2024-07-12 20:36:06.290635] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:12.365 [2024-07-12 20:36:06.290649] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:12.365 [2024-07-12 20:36:06.290666] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:12.365 [2024-07-12 20:36:06.290677] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:12.365 [2024-07-12 20:36:06.290693] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:12.365 [2024-07-12 20:36:06.290704] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:12.365 [2024-07-12 20:36:06.290718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.365 [2024-07-12 20:36:06.290729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:12.365 [2024-07-12 20:36:06.290742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:19:12.365 [2024-07-12 20:36:06.290753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.365 [2024-07-12 20:36:06.290849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.365 [2024-07-12 20:36:06.290869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:12.365 [2024-07-12 20:36:06.290885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:12.365 [2024-07-12 20:36:06.290896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.365 [2024-07-12 20:36:06.291033] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:12.365 [2024-07-12 20:36:06.291053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:12.365 [2024-07-12 20:36:06.291068] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.365 [2024-07-12 20:36:06.291080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.365 [2024-07-12 20:36:06.291098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:12.365 [2024-07-12 20:36:06.291109] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:12.365 [2024-07-12 20:36:06.291124] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:12.365 [2024-07-12 20:36:06.291135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:12.366 [2024-07-12 20:36:06.291148] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:12.366 [2024-07-12 20:36:06.291160] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.366 [2024-07-12 20:36:06.291173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:12.366 [2024-07-12 20:36:06.291184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:12.366 [2024-07-12 20:36:06.291197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.366 [2024-07-12 20:36:06.291208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:12.366 [2024-07-12 20:36:06.291221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:12.366 [2024-07-12 20:36:06.291231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.366 [2024-07-12 20:36:06.291263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:12.366 [2024-07-12 20:36:06.291276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:12.366 [2024-07-12 20:36:06.291289] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:12.366 [2024-07-12 20:36:06.291300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:12.366 [2024-07-12 20:36:06.291315] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:12.366 [2024-07-12 20:36:06.291326] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.366 [2024-07-12 20:36:06.291351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:12.366 [2024-07-12 20:36:06.291362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:12.366 [2024-07-12 20:36:06.291377] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.366 [2024-07-12 20:36:06.291387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:12.366 [2024-07-12 20:36:06.291400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:12.366 [2024-07-12 20:36:06.291410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.366 [2024-07-12 20:36:06.291423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:12.366 [2024-07-12 20:36:06.291433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:12.366 [2024-07-12 20:36:06.291446] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.366 [2024-07-12 20:36:06.291456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:12.366 [2024-07-12 20:36:06.291469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:12.366 [2024-07-12 20:36:06.291479] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.366 [2024-07-12 20:36:06.291491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:12.366 [2024-07-12 20:36:06.291502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:12.366 [2024-07-12 20:36:06.291517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.366 [2024-07-12 20:36:06.291527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:12.366 [2024-07-12 20:36:06.291541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:12.366 [2024-07-12 20:36:06.291551] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.366 [2024-07-12 20:36:06.291564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:12.366 [2024-07-12 20:36:06.291576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:12.366 [2024-07-12 20:36:06.291588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.366 [2024-07-12 20:36:06.291598] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:12.366 [2024-07-12 20:36:06.291613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:12.366 [2024-07-12 20:36:06.291624] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.366 [2024-07-12 20:36:06.291637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.366 [2024-07-12 20:36:06.291649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:12.366 [2024-07-12 20:36:06.291667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:12.366 [2024-07-12 20:36:06.291678] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:12.366 [2024-07-12 20:36:06.291691] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:12.366 [2024-07-12 20:36:06.291701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:12.366 [2024-07-12 20:36:06.291718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:12.366 [2024-07-12 20:36:06.291730] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:12.366 [2024-07-12 20:36:06.291747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.366 [2024-07-12 20:36:06.291763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:12.366 [2024-07-12 20:36:06.291777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:12.366 [2024-07-12 20:36:06.291788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:12.366 [2024-07-12 20:36:06.291802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:12.366 [2024-07-12 20:36:06.291813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:12.366 [2024-07-12 20:36:06.291826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:12.366 [2024-07-12 20:36:06.291837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:12.366 [2024-07-12 20:36:06.291851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:12.366 [2024-07-12 20:36:06.291862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:12.366 [2024-07-12 20:36:06.291876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:12.366 [2024-07-12 20:36:06.291887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:12.366 [2024-07-12 20:36:06.291901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:12.366 [2024-07-12 20:36:06.291912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:12.366 [2024-07-12 20:36:06.291928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:12.366 [2024-07-12 20:36:06.291940] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:12.366 [2024-07-12 20:36:06.291955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.366 [2024-07-12 20:36:06.291967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:19:12.366 [2024-07-12 20:36:06.291981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:12.366 [2024-07-12 20:36:06.291993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:12.366 [2024-07-12 20:36:06.292007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:12.366 [2024-07-12 20:36:06.292019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.366 [2024-07-12 20:36:06.292035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:12.366 [2024-07-12 20:36:06.292047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:19:12.366 [2024-07-12 20:36:06.292070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.366 [2024-07-12 20:36:06.307369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.366 [2024-07-12 20:36:06.307437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.366 [2024-07-12 20:36:06.307461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.181 ms 00:19:12.366 [2024-07-12 20:36:06.307491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.366 [2024-07-12 20:36:06.307675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.366 [2024-07-12 20:36:06.307729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:12.366 [2024-07-12 20:36:06.307746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:12.366 [2024-07-12 20:36:06.307764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.366 [2024-07-12 20:36:06.321612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.366 [2024-07-12 20:36:06.321674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.366 [2024-07-12 20:36:06.321695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.813 ms 00:19:12.366 [2024-07-12 20:36:06.321724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.366 [2024-07-12 20:36:06.321835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.366 [2024-07-12 20:36:06.321864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.366 [2024-07-12 20:36:06.321881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:12.366 [2024-07-12 20:36:06.321917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.366 [2024-07-12 20:36:06.322512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.366 [2024-07-12 20:36:06.322549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.366 [2024-07-12 20:36:06.322567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:19:12.366 [2024-07-12 20:36:06.322585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.366 [2024-07-12 20:36:06.322773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.366 [2024-07-12 20:36:06.322813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.366 [2024-07-12 20:36:06.322829] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:19:12.366 [2024-07-12 20:36:06.322864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.366 [2024-07-12 20:36:06.332110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.366 [2024-07-12 20:36:06.332161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.366 [2024-07-12 20:36:06.332179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.212 ms 00:19:12.366 [2024-07-12 20:36:06.332204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.366 [2024-07-12 20:36:06.335217] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:12.366 [2024-07-12 20:36:06.335275] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:12.366 [2024-07-12 20:36:06.335305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.366 [2024-07-12 20:36:06.335352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:12.366 [2024-07-12 20:36:06.335365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.925 ms 00:19:12.366 [2024-07-12 20:36:06.335379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.366 [2024-07-12 20:36:06.351289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.366 [2024-07-12 20:36:06.351337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:12.367 [2024-07-12 20:36:06.351355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.854 ms 00:19:12.367 [2024-07-12 20:36:06.351374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.367 [2024-07-12 20:36:06.353236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.367 [2024-07-12 20:36:06.353296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:12.367 [2024-07-12 20:36:06.353313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.756 ms 00:19:12.367 [2024-07-12 20:36:06.353326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.367 [2024-07-12 20:36:06.354844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.367 [2024-07-12 20:36:06.354886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:12.367 [2024-07-12 20:36:06.354902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:19:12.367 [2024-07-12 20:36:06.354915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.367 [2024-07-12 20:36:06.355328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.367 [2024-07-12 20:36:06.355361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:12.367 [2024-07-12 20:36:06.355375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:19:12.367 [2024-07-12 20:36:06.355390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.367 [2024-07-12 20:36:06.386741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.367 [2024-07-12 20:36:06.386822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:12.367 [2024-07-12 20:36:06.386855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.318 ms 
00:19:12.367 [2024-07-12 20:36:06.386883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.367 [2024-07-12 20:36:06.395176] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:12.367 [2024-07-12 20:36:06.415424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.367 [2024-07-12 20:36:06.415510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:12.367 [2024-07-12 20:36:06.415536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.380 ms 00:19:12.367 [2024-07-12 20:36:06.415548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.367 [2024-07-12 20:36:06.415695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.367 [2024-07-12 20:36:06.415718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:12.367 [2024-07-12 20:36:06.415735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:12.367 [2024-07-12 20:36:06.415747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.367 [2024-07-12 20:36:06.415824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.367 [2024-07-12 20:36:06.415840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:12.367 [2024-07-12 20:36:06.415856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:12.367 [2024-07-12 20:36:06.415868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.367 [2024-07-12 20:36:06.415904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.367 [2024-07-12 20:36:06.415918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:12.367 [2024-07-12 20:36:06.415942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:12.367 [2024-07-12 20:36:06.415954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.367 [2024-07-12 20:36:06.416008] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:12.367 [2024-07-12 20:36:06.416024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.367 [2024-07-12 20:36:06.416038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:12.367 [2024-07-12 20:36:06.416050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:12.367 [2024-07-12 20:36:06.416063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.367 [2024-07-12 20:36:06.420330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.367 [2024-07-12 20:36:06.420379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:12.367 [2024-07-12 20:36:06.420396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.233 ms 00:19:12.367 [2024-07-12 20:36:06.420414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.367 [2024-07-12 20:36:06.420521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.367 [2024-07-12 20:36:06.420546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:12.367 [2024-07-12 20:36:06.420562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:12.367 [2024-07-12 20:36:06.420575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.367 [2024-07-12 
20:36:06.421771] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:12.367 [2024-07-12 20:36:06.422968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.496 ms, result 0 00:19:12.367 [2024-07-12 20:36:06.423907] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.367 Some configs were skipped because the RPC state that can call them passed over. 00:19:12.367 20:36:06 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:12.626 [2024-07-12 20:36:06.673804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.626 [2024-07-12 20:36:06.673873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:12.626 [2024-07-12 20:36:06.673898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.552 ms 00:19:12.626 [2024-07-12 20:36:06.673911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.626 [2024-07-12 20:36:06.673961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.725 ms, result 0 00:19:12.626 true 00:19:12.626 20:36:06 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:12.885 [2024-07-12 20:36:06.893695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.885 [2024-07-12 20:36:06.893763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:12.885 [2024-07-12 20:36:06.893784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:19:12.885 [2024-07-12 20:36:06.893799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.885 [2024-07-12 20:36:06.893847] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.312 ms, result 0 00:19:12.885 true 00:19:12.885 20:36:06 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 92617 00:19:12.885 20:36:06 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 92617 ']' 00:19:12.885 20:36:06 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 92617 00:19:12.885 20:36:06 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:19:12.885 20:36:06 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.885 20:36:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92617 00:19:12.885 killing process with pid 92617 00:19:12.885 20:36:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:12.885 20:36:06 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:12.885 20:36:06 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92617' 00:19:12.885 20:36:06 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 92617 00:19:12.885 20:36:06 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 92617 00:19:13.145 [2024-07-12 20:36:07.119769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.145 [2024-07-12 20:36:07.119867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:13.145 [2024-07-12 20:36:07.119891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:13.145 [2024-07-12 
20:36:07.119903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.145 [2024-07-12 20:36:07.119942] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:13.145 [2024-07-12 20:36:07.120757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.145 [2024-07-12 20:36:07.120786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:13.145 [2024-07-12 20:36:07.120803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:19:13.145 [2024-07-12 20:36:07.120827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.145 [2024-07-12 20:36:07.121140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.145 [2024-07-12 20:36:07.121169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:13.145 [2024-07-12 20:36:07.121183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:19:13.145 [2024-07-12 20:36:07.121196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.145 [2024-07-12 20:36:07.125512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.145 [2024-07-12 20:36:07.125560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:13.145 [2024-07-12 20:36:07.125586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.290 ms 00:19:13.145 [2024-07-12 20:36:07.125603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.145 [2024-07-12 20:36:07.133330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.145 [2024-07-12 20:36:07.133390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:13.145 [2024-07-12 20:36:07.133412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.680 ms 00:19:13.145 [2024-07-12 20:36:07.133429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.145 [2024-07-12 20:36:07.134804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.145 [2024-07-12 20:36:07.134849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:13.145 [2024-07-12 20:36:07.134866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.309 ms 00:19:13.145 [2024-07-12 20:36:07.134880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.145 [2024-07-12 20:36:07.138143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.145 [2024-07-12 20:36:07.138191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:13.145 [2024-07-12 20:36:07.138208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:19:13.145 [2024-07-12 20:36:07.138227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.145 [2024-07-12 20:36:07.138389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.145 [2024-07-12 20:36:07.138415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:13.145 [2024-07-12 20:36:07.138428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:13.145 [2024-07-12 20:36:07.138442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.145 [2024-07-12 20:36:07.140357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.145 [2024-07-12 20:36:07.140397] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:13.145 [2024-07-12 20:36:07.140413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.889 ms 00:19:13.145 [2024-07-12 20:36:07.140429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.145 [2024-07-12 20:36:07.141869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.145 [2024-07-12 20:36:07.141916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:13.145 [2024-07-12 20:36:07.141931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.399 ms 00:19:13.145 [2024-07-12 20:36:07.141945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.145 [2024-07-12 20:36:07.143133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.145 [2024-07-12 20:36:07.143176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:13.145 [2024-07-12 20:36:07.143191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.145 ms 00:19:13.145 [2024-07-12 20:36:07.143205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.145 [2024-07-12 20:36:07.144366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.145 [2024-07-12 20:36:07.144408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:13.145 [2024-07-12 20:36:07.144423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:19:13.145 [2024-07-12 20:36:07.144436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.145 [2024-07-12 20:36:07.144476] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:13.145 [2024-07-12 20:36:07.144504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:13.145 [2024-07-12 20:36:07.144519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:13.145 [2024-07-12 20:36:07.144537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:13.145 [2024-07-12 20:36:07.144549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:13.145 [2024-07-12 20:36:07.144565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:13.145 [2024-07-12 20:36:07.144577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:13.145 [2024-07-12 20:36:07.144591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:13.145 [2024-07-12 20:36:07.144603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:13.145 [2024-07-12 20:36:07.144617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:13.145 [2024-07-12 20:36:07.144628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:13.145 [2024-07-12 20:36:07.144642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:13.145 [2024-07-12 20:36:07.144654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:13.145 [2024-07-12 20:36:07.144668] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.144991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145003] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 
20:36:07.145356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:19:13.146 [2024-07-12 20:36:07.145684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:13.146 [2024-07-12 20:36:07.145879] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:13.146 [2024-07-12 20:36:07.145891] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 292a0a4e-5585-47cf-8793-eea72c677626 00:19:13.146 [2024-07-12 20:36:07.145909] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:13.146 [2024-07-12 20:36:07.145920] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:13.146 [2024-07-12 20:36:07.145933] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:13.146 [2024-07-12 20:36:07.145944] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:13.147 [2024-07-12 20:36:07.145957] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:13.147 [2024-07-12 20:36:07.145969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:13.147 [2024-07-12 20:36:07.145985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:13.147 [2024-07-12 20:36:07.145995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:13.147 [2024-07-12 20:36:07.146007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:13.147 [2024-07-12 20:36:07.146019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.147 [2024-07-12 20:36:07.146042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:13.147 [2024-07-12 20:36:07.146054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:19:13.147 [2024-07-12 20:36:07.146071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.148249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.147 [2024-07-12 20:36:07.148287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:13.147 [2024-07-12 20:36:07.148302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.104 ms 00:19:13.147 [2024-07-12 20:36:07.148327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.148462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.147 [2024-07-12 20:36:07.148480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:13.147 [2024-07-12 20:36:07.148493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:13.147 [2024-07-12 20:36:07.148507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.156626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.147 [2024-07-12 20:36:07.156684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.147 [2024-07-12 20:36:07.156702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.147 [2024-07-12 20:36:07.156717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.156853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.147 [2024-07-12 20:36:07.156875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.147 [2024-07-12 20:36:07.156888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.147 [2024-07-12 20:36:07.156906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.156974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.147 [2024-07-12 20:36:07.156996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.147 [2024-07-12 20:36:07.157009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.147 [2024-07-12 20:36:07.157022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.157049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.147 [2024-07-12 20:36:07.157066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.147 [2024-07-12 20:36:07.157078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.147 [2024-07-12 20:36:07.157091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.173780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.147 [2024-07-12 20:36:07.173856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:13.147 [2024-07-12 20:36:07.173876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.147 [2024-07-12 20:36:07.173891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.183993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.147 [2024-07-12 20:36:07.184070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:13.147 [2024-07-12 20:36:07.184091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.147 
[2024-07-12 20:36:07.184108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.184218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.147 [2024-07-12 20:36:07.184255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:13.147 [2024-07-12 20:36:07.184272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.147 [2024-07-12 20:36:07.184286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.184329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.147 [2024-07-12 20:36:07.184346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:13.147 [2024-07-12 20:36:07.184359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.147 [2024-07-12 20:36:07.184374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.184477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.147 [2024-07-12 20:36:07.184503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:13.147 [2024-07-12 20:36:07.184515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.147 [2024-07-12 20:36:07.184528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.184581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.147 [2024-07-12 20:36:07.184603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:13.147 [2024-07-12 20:36:07.184615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.147 [2024-07-12 20:36:07.184631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.184681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.147 [2024-07-12 20:36:07.184699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:13.147 [2024-07-12 20:36:07.184715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.147 [2024-07-12 20:36:07.184728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.184786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.147 [2024-07-12 20:36:07.184806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:13.147 [2024-07-12 20:36:07.184819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.147 [2024-07-12 20:36:07.184832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.147 [2024-07-12 20:36:07.185014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.221 ms, result 0 00:19:13.407 20:36:07 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:13.407 20:36:07 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:13.407 [2024-07-12 20:36:07.544372] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
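The trace above records the whole trim exercise: the FTL device is brought up, trim.sh issues two bdev_ftl_unmap RPCs, the target process is torn down via killprocess (the 'FTL shutdown' sequence), and spdk_dd then re-opens ftl0 from its JSON config to read the data back. Condensed into the commands actually shown in this log, the flow looks like the sketch below (illustration only; rpc.py socket defaults, error handling, and the killprocess helper are left out):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # trim the first 1024 blocks, then the last 1024 of the 23592960 L2P entries
  # reported above (23591936 + 1024 = 23592960)
  $RPC bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024
  $RPC bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # spdk_dd starts its own SPDK app from ftl.json (hence the second FTL startup
  # that follows in this log) and copies 65536 blocks from the ftl0 bdev to a file
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json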
00:19:13.407 [2024-07-12 20:36:07.544582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92654 ] 00:19:13.671 [2024-07-12 20:36:07.696436] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:13.671 [2024-07-12 20:36:07.718872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.672 [2024-07-12 20:36:07.808603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.930 [2024-07-12 20:36:07.931862] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:13.930 [2024-07-12 20:36:07.931951] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:14.190 [2024-07-12 20:36:08.092227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.190 [2024-07-12 20:36:08.092318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:14.190 [2024-07-12 20:36:08.092341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:14.190 [2024-07-12 20:36:08.092353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.190 [2024-07-12 20:36:08.095122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.190 [2024-07-12 20:36:08.095166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.190 [2024-07-12 20:36:08.095183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.738 ms 00:19:14.190 [2024-07-12 20:36:08.095195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.190 [2024-07-12 20:36:08.095339] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:14.190 [2024-07-12 20:36:08.095729] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:14.190 [2024-07-12 20:36:08.095766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.190 [2024-07-12 20:36:08.095780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.190 [2024-07-12 20:36:08.095793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:19:14.190 [2024-07-12 20:36:08.095805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.190 [2024-07-12 20:36:08.097892] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:14.190 [2024-07-12 20:36:08.100745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.190 [2024-07-12 20:36:08.100788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:14.190 [2024-07-12 20:36:08.100805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.855 ms 00:19:14.190 [2024-07-12 20:36:08.100832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.190 [2024-07-12 20:36:08.100933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.190 [2024-07-12 20:36:08.100954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:14.190 [2024-07-12 20:36:08.100977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:14.190 [2024-07-12 
20:36:08.100993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.190 [2024-07-12 20:36:08.109553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.190 [2024-07-12 20:36:08.109611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.190 [2024-07-12 20:36:08.109639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.498 ms 00:19:14.190 [2024-07-12 20:36:08.109652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.190 [2024-07-12 20:36:08.109829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.190 [2024-07-12 20:36:08.109857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.190 [2024-07-12 20:36:08.109872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:14.190 [2024-07-12 20:36:08.109883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.190 [2024-07-12 20:36:08.109928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.190 [2024-07-12 20:36:08.109944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:14.190 [2024-07-12 20:36:08.109956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:14.190 [2024-07-12 20:36:08.109979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.191 [2024-07-12 20:36:08.110029] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:14.191 [2024-07-12 20:36:08.112102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.191 [2024-07-12 20:36:08.112139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.191 [2024-07-12 20:36:08.112155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:19:14.191 [2024-07-12 20:36:08.112179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.191 [2024-07-12 20:36:08.112232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.191 [2024-07-12 20:36:08.112281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:14.191 [2024-07-12 20:36:08.112299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:14.191 [2024-07-12 20:36:08.112312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.191 [2024-07-12 20:36:08.112345] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:14.191 [2024-07-12 20:36:08.112374] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:14.191 [2024-07-12 20:36:08.112421] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:14.191 [2024-07-12 20:36:08.112843] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:14.191 [2024-07-12 20:36:08.112955] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:14.191 [2024-07-12 20:36:08.112972] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:14.191 [2024-07-12 20:36:08.112988] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 
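Some of the layout numbers in these dumps can be cross-checked directly; for example, the 90.00 MiB l2p region on the NV cache is just the L2P table itself (a quick sanity check, nothing more):

  # 23592960 L2P entries x 4-byte L2P address size = 90 MiB,
  # matching the "Region l2p ... blocks: 90.00 MiB" lines in the layout dumps
  echo $(( 23592960 * 4 / 1024 / 1024 ))   # prints 90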
00:19:14.191 [2024-07-12 20:36:08.113015] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:14.191 [2024-07-12 20:36:08.113030] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:14.191 [2024-07-12 20:36:08.113043] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:14.191 [2024-07-12 20:36:08.113059] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:14.191 [2024-07-12 20:36:08.113070] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:14.191 [2024-07-12 20:36:08.113081] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:14.191 [2024-07-12 20:36:08.113093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.191 [2024-07-12 20:36:08.113107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:14.191 [2024-07-12 20:36:08.113120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:19:14.191 [2024-07-12 20:36:08.113130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.191 [2024-07-12 20:36:08.113227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.191 [2024-07-12 20:36:08.113262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:14.191 [2024-07-12 20:36:08.113277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:14.191 [2024-07-12 20:36:08.113294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.191 [2024-07-12 20:36:08.113404] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:14.191 [2024-07-12 20:36:08.113420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:14.191 [2024-07-12 20:36:08.113432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.191 [2024-07-12 20:36:08.113445] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:14.191 [2024-07-12 20:36:08.113466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113477] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:14.191 [2024-07-12 20:36:08.113489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:14.191 [2024-07-12 20:36:08.113500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.191 [2024-07-12 20:36:08.113528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:14.191 [2024-07-12 20:36:08.113539] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:14.191 [2024-07-12 20:36:08.113549] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.191 [2024-07-12 20:36:08.113573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:14.191 [2024-07-12 20:36:08.113585] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:14.191 [2024-07-12 20:36:08.113596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:14.191 [2024-07-12 20:36:08.113617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:14.191 [2024-07-12 20:36:08.113627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:14.191 [2024-07-12 20:36:08.113648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.191 [2024-07-12 20:36:08.113669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:14.191 [2024-07-12 20:36:08.113679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.191 [2024-07-12 20:36:08.113710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:14.191 [2024-07-12 20:36:08.113722] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113733] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.191 [2024-07-12 20:36:08.113743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:14.191 [2024-07-12 20:36:08.113754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.191 [2024-07-12 20:36:08.113775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:14.191 [2024-07-12 20:36:08.113786] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.191 [2024-07-12 20:36:08.113806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:14.191 [2024-07-12 20:36:08.113818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:14.191 [2024-07-12 20:36:08.113828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.191 [2024-07-12 20:36:08.113839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:14.191 [2024-07-12 20:36:08.113850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:14.191 [2024-07-12 20:36:08.113860] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:14.191 [2024-07-12 20:36:08.113884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:14.191 [2024-07-12 20:36:08.113895] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113905] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:14.191 [2024-07-12 20:36:08.113917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:14.191 [2024-07-12 20:36:08.113939] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.191 [2024-07-12 20:36:08.113950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.191 [2024-07-12 20:36:08.113962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:14.191 [2024-07-12 20:36:08.113973] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:14.191 [2024-07-12 20:36:08.113984] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:14.191 [2024-07-12 20:36:08.113995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:14.191 [2024-07-12 20:36:08.114005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:14.191 [2024-07-12 20:36:08.114016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:14.191 [2024-07-12 20:36:08.114028] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:14.191 [2024-07-12 20:36:08.114054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.191 [2024-07-12 20:36:08.114068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:14.191 [2024-07-12 20:36:08.114080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:14.191 [2024-07-12 20:36:08.114094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:14.191 [2024-07-12 20:36:08.114108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:14.191 [2024-07-12 20:36:08.114119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:14.191 [2024-07-12 20:36:08.114131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:14.191 [2024-07-12 20:36:08.114142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:14.191 [2024-07-12 20:36:08.114154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:14.191 [2024-07-12 20:36:08.114165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:14.191 [2024-07-12 20:36:08.114176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:14.191 [2024-07-12 20:36:08.114187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:14.191 [2024-07-12 20:36:08.114199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:14.191 [2024-07-12 20:36:08.114210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:14.191 [2024-07-12 20:36:08.114222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:14.191 [2024-07-12 20:36:08.114233] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:14.191 [2024-07-12 20:36:08.114272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.191 [2024-07-12 20:36:08.114285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:14.191 [2024-07-12 20:36:08.114297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:14.191 [2024-07-12 20:36:08.114313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:14.192 [2024-07-12 20:36:08.114326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:14.192 [2024-07-12 20:36:08.114338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.114359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:14.192 [2024-07-12 20:36:08.114372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:19:14.192 [2024-07-12 20:36:08.114383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.138423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.138515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.192 [2024-07-12 20:36:08.138542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.936 ms 00:19:14.192 [2024-07-12 20:36:08.138555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.138749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.138769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:14.192 [2024-07-12 20:36:08.138783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:14.192 [2024-07-12 20:36:08.138810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.151343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.151405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.192 [2024-07-12 20:36:08.151431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.496 ms 00:19:14.192 [2024-07-12 20:36:08.151444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.151568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.151587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.192 [2024-07-12 20:36:08.151606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:14.192 [2024-07-12 20:36:08.151619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.152177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.152205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.192 [2024-07-12 20:36:08.152221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:19:14.192 [2024-07-12 20:36:08.152251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.152427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.152447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.192 [2024-07-12 20:36:08.152459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:14.192 [2024-07-12 20:36:08.152470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.160632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.160681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.192 [2024-07-12 20:36:08.160699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.130 ms 00:19:14.192 [2024-07-12 20:36:08.160718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.163766] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:14.192 [2024-07-12 20:36:08.163842] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:14.192 [2024-07-12 20:36:08.163861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.163874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:14.192 [2024-07-12 20:36:08.163887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.982 ms 00:19:14.192 [2024-07-12 20:36:08.163912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.180076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.180143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:14.192 [2024-07-12 20:36:08.180161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.102 ms 00:19:14.192 [2024-07-12 20:36:08.180184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.182807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.182849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:14.192 [2024-07-12 20:36:08.182865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.476 ms 00:19:14.192 [2024-07-12 20:36:08.182877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.184476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.184513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:14.192 [2024-07-12 20:36:08.184528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:19:14.192 [2024-07-12 20:36:08.184549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.185011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.185052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:14.192 [2024-07-12 20:36:08.185068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:19:14.192 [2024-07-12 20:36:08.185080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.206892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.206977] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:14.192 [2024-07-12 20:36:08.206999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.776 ms 00:19:14.192 [2024-07-12 20:36:08.207021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.215652] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:14.192 [2024-07-12 20:36:08.237488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.237564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:14.192 [2024-07-12 20:36:08.237586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.300 ms 00:19:14.192 [2024-07-12 20:36:08.237598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.237747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.237768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:14.192 [2024-07-12 20:36:08.237783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:14.192 [2024-07-12 20:36:08.237811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.237900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.237917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:14.192 [2024-07-12 20:36:08.237930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:14.192 [2024-07-12 20:36:08.237955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.237991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.238013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:14.192 [2024-07-12 20:36:08.238025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:14.192 [2024-07-12 20:36:08.238037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.238079] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:14.192 [2024-07-12 20:36:08.238099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.238112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:14.192 [2024-07-12 20:36:08.238126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:14.192 [2024-07-12 20:36:08.238138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.242458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.242502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:14.192 [2024-07-12 20:36:08.242527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.285 ms 00:19:14.192 [2024-07-12 20:36:08.242539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.242635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.192 [2024-07-12 20:36:08.242655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:14.192 [2024-07-12 20:36:08.242682] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:14.192 [2024-07-12 20:36:08.242693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.192 [2024-07-12 20:36:08.243849] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:14.192 [2024-07-12 20:36:08.245094] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.299 ms, result 0 00:19:14.192 [2024-07-12 20:36:08.245963] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:14.192 [2024-07-12 20:36:08.254254] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:24.467  Copying: 28/256 [MB] (28 MBps) Copying: 54/256 [MB] (25 MBps) Copying: 77/256 [MB] (22 MBps) Copying: 100/256 [MB] (23 MBps) Copying: 126/256 [MB] (25 MBps) Copying: 152/256 [MB] (25 MBps) Copying: 178/256 [MB] (26 MBps) Copying: 202/256 [MB] (23 MBps) Copying: 227/256 [MB] (25 MBps) Copying: 252/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-12 20:36:18.442943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:24.467 [2024-07-12 20:36:18.444635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.467 [2024-07-12 20:36:18.444679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:24.467 [2024-07-12 20:36:18.444700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:24.467 [2024-07-12 20:36:18.444713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.467 [2024-07-12 20:36:18.444745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:24.467 [2024-07-12 20:36:18.445561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.467 [2024-07-12 20:36:18.445594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:24.467 [2024-07-12 20:36:18.445609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:19:24.467 [2024-07-12 20:36:18.445622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.467 [2024-07-12 20:36:18.445921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.467 [2024-07-12 20:36:18.445948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:24.467 [2024-07-12 20:36:18.445962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:24.467 [2024-07-12 20:36:18.445973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.467 [2024-07-12 20:36:18.449630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.467 [2024-07-12 20:36:18.449662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:24.467 [2024-07-12 20:36:18.449692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.632 ms 00:19:24.467 [2024-07-12 20:36:18.449704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.467 [2024-07-12 20:36:18.457014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.467 [2024-07-12 20:36:18.457069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:24.467 [2024-07-12 20:36:18.457085] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.279 ms 00:19:24.467 [2024-07-12 20:36:18.457097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.467 [2024-07-12 20:36:18.459092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.467 [2024-07-12 20:36:18.459136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:24.467 [2024-07-12 20:36:18.459152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:19:24.467 [2024-07-12 20:36:18.459163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.467 [2024-07-12 20:36:18.462834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.467 [2024-07-12 20:36:18.462879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:24.467 [2024-07-12 20:36:18.462896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.627 ms 00:19:24.467 [2024-07-12 20:36:18.462909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.467 [2024-07-12 20:36:18.463071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.467 [2024-07-12 20:36:18.463095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:24.467 [2024-07-12 20:36:18.463109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:19:24.467 [2024-07-12 20:36:18.463121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.467 [2024-07-12 20:36:18.464971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.467 [2024-07-12 20:36:18.465024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:24.467 [2024-07-12 20:36:18.465039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.826 ms 00:19:24.467 [2024-07-12 20:36:18.465050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.467 [2024-07-12 20:36:18.466585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.467 [2024-07-12 20:36:18.466622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:24.467 [2024-07-12 20:36:18.466638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:19:24.467 [2024-07-12 20:36:18.466648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.467 [2024-07-12 20:36:18.467868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.467 [2024-07-12 20:36:18.467908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:24.467 [2024-07-12 20:36:18.467923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:19:24.467 [2024-07-12 20:36:18.467934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.467 [2024-07-12 20:36:18.468981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.467 [2024-07-12 20:36:18.469019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:24.467 [2024-07-12 20:36:18.469035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:19:24.467 [2024-07-12 20:36:18.469047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.467 [2024-07-12 20:36:18.469086] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:24.467 [2024-07-12 20:36:18.469110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:24.467 [2024-07-12 20:36:18.469125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:24.467 [2024-07-12 20:36:18.469137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:24.467 [2024-07-12 20:36:18.469150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:24.467 [2024-07-12 20:36:18.469162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:24.467 [2024-07-12 20:36:18.469174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:24.467 [2024-07-12 20:36:18.469186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:24.467 [2024-07-12 20:36:18.469198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:24.467 [2024-07-12 20:36:18.469210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:24.467 [2024-07-12 20:36:18.469222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469720] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.469995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 
20:36:18.470019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:24.468 [2024-07-12 20:36:18.470322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
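The bands-validity dump above follows a fixed format ("Band <n>: <valid> / <total> wr_cnt: <n> state: <state>"). A small sketch to tally bands per state from a saved copy of this output (the file name ftl_trim.log is hypothetical):

    # Count how many bands ftl_dev_dump_bands reports in each state.
    grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' ftl_trim.log \
      | awk '{ counts[$NF]++ } END { for (s in counts) printf "%s: %d bands\n", s, counts[s] }'

For the dump above this should report all 100 bands as free, consistent with the zero write counts.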
00:19:24.468 [2024-07-12 20:36:18.470343] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:24.468 [2024-07-12 20:36:18.470355] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 292a0a4e-5585-47cf-8793-eea72c677626 00:19:24.468 [2024-07-12 20:36:18.470368] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:24.468 [2024-07-12 20:36:18.470379] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:24.468 [2024-07-12 20:36:18.470390] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:24.468 [2024-07-12 20:36:18.470409] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:24.468 [2024-07-12 20:36:18.470420] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:24.468 [2024-07-12 20:36:18.470437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:24.468 [2024-07-12 20:36:18.470459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:24.468 [2024-07-12 20:36:18.470470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:24.468 [2024-07-12 20:36:18.470481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:24.468 [2024-07-12 20:36:18.470492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.468 [2024-07-12 20:36:18.470515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:24.468 [2024-07-12 20:36:18.470528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:19:24.468 [2024-07-12 20:36:18.470539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.468 [2024-07-12 20:36:18.472662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.468 [2024-07-12 20:36:18.472702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:24.468 [2024-07-12 20:36:18.472717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.096 ms 00:19:24.468 [2024-07-12 20:36:18.472729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.468 [2024-07-12 20:36:18.472861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.468 [2024-07-12 20:36:18.472887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:24.468 [2024-07-12 20:36:18.472900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:24.468 [2024-07-12 20:36:18.472911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.468 [2024-07-12 20:36:18.480166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.468 [2024-07-12 20:36:18.480235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.468 [2024-07-12 20:36:18.480301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.468 [2024-07-12 20:36:18.480314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.469 [2024-07-12 20:36:18.480434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.469 [2024-07-12 20:36:18.480451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.469 [2024-07-12 20:36:18.480463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.469 [2024-07-12 20:36:18.480474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.469 [2024-07-12 
20:36:18.480537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.469 [2024-07-12 20:36:18.480568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.469 [2024-07-12 20:36:18.480586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.469 [2024-07-12 20:36:18.480597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.469 [2024-07-12 20:36:18.480623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.469 [2024-07-12 20:36:18.480636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.469 [2024-07-12 20:36:18.480648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.469 [2024-07-12 20:36:18.480659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.469 [2024-07-12 20:36:18.496623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.469 [2024-07-12 20:36:18.496707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.469 [2024-07-12 20:36:18.496726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.469 [2024-07-12 20:36:18.496739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.469 [2024-07-12 20:36:18.506830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.469 [2024-07-12 20:36:18.506903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.469 [2024-07-12 20:36:18.506922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.469 [2024-07-12 20:36:18.506935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.469 [2024-07-12 20:36:18.507029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.469 [2024-07-12 20:36:18.507047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.469 [2024-07-12 20:36:18.507059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.469 [2024-07-12 20:36:18.507080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.469 [2024-07-12 20:36:18.507119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.469 [2024-07-12 20:36:18.507133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.469 [2024-07-12 20:36:18.507145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.469 [2024-07-12 20:36:18.507156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.469 [2024-07-12 20:36:18.507277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.469 [2024-07-12 20:36:18.507297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.469 [2024-07-12 20:36:18.507310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.469 [2024-07-12 20:36:18.507321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.469 [2024-07-12 20:36:18.507380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.469 [2024-07-12 20:36:18.507398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:24.469 [2024-07-12 20:36:18.507411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.469 [2024-07-12 20:36:18.507422] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.469 [2024-07-12 20:36:18.507483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.469 [2024-07-12 20:36:18.507508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.469 [2024-07-12 20:36:18.507521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.469 [2024-07-12 20:36:18.507532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.469 [2024-07-12 20:36:18.507607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.469 [2024-07-12 20:36:18.507631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.469 [2024-07-12 20:36:18.507645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.469 [2024-07-12 20:36:18.507656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.469 [2024-07-12 20:36:18.507827] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.157 ms, result 0 00:19:24.727 00:19:24.727 00:19:24.727 20:36:18 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:24.727 20:36:18 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:25.294 20:36:19 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:25.552 [2024-07-12 20:36:19.448853] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:19:25.552 [2024-07-12 20:36:19.449072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92780 ] 00:19:25.552 [2024-07-12 20:36:19.601885] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
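For reference, the three ftl_trim steps logged a few entries above (trim.sh@86, @87 and @90) can be repeated by hand; this is only a sketch, with paths copied verbatim from the log and the assumption that the FTL bdev described in ftl.json is still configured:

    SPDK=/home/vagrant/spdk_repo/spdk
    # trim.sh@86: compare the first 4 MiB (4194304 bytes) of the test data file against zeros
    cmp --bytes=4194304 "$SPDK/test/ftl/data" /dev/zero
    # trim.sh@87: checksum the test data file
    md5sum "$SPDK/test/ftl/data"
    # trim.sh@90: copy the random pattern to ftl0 with spdk_dd, as logged above
    "$SPDK/build/bin/spdk_dd" \
      --if="$SPDK/test/ftl/random_pattern" --ob=ftl0 \
      --count=1024 --json="$SPDK/test/ftl/config/ftl.json"

The startup trace that follows below belongs to that last spdk_dd invocation.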
00:19:25.552 [2024-07-12 20:36:19.619219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.811 [2024-07-12 20:36:19.701934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.811 [2024-07-12 20:36:19.827597] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:25.811 [2024-07-12 20:36:19.827728] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:26.071 [2024-07-12 20:36:19.988989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.071 [2024-07-12 20:36:19.989061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:26.071 [2024-07-12 20:36:19.989082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:26.071 [2024-07-12 20:36:19.989095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.071 [2024-07-12 20:36:19.992217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.072 [2024-07-12 20:36:19.992285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:26.072 [2024-07-12 20:36:19.992304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.092 ms 00:19:26.072 [2024-07-12 20:36:19.992317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.072 [2024-07-12 20:36:19.992449] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:26.072 [2024-07-12 20:36:19.992773] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:26.072 [2024-07-12 20:36:19.992809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.072 [2024-07-12 20:36:19.992822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:26.072 [2024-07-12 20:36:19.992836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:19:26.072 [2024-07-12 20:36:19.992847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.072 [2024-07-12 20:36:19.995078] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:26.072 [2024-07-12 20:36:19.998083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.072 [2024-07-12 20:36:19.998127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:26.072 [2024-07-12 20:36:19.998143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.008 ms 00:19:26.072 [2024-07-12 20:36:19.998156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.072 [2024-07-12 20:36:19.998276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.072 [2024-07-12 20:36:19.998311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:26.072 [2024-07-12 20:36:19.998339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:26.072 [2024-07-12 20:36:19.998361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.072 [2024-07-12 20:36:20.007188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.072 [2024-07-12 20:36:20.007318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:26.072 [2024-07-12 20:36:20.007353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.730 ms 00:19:26.072 [2024-07-12 20:36:20.007366] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.072 [2024-07-12 20:36:20.007558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.072 [2024-07-12 20:36:20.007588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:26.072 [2024-07-12 20:36:20.007602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:26.072 [2024-07-12 20:36:20.007614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.072 [2024-07-12 20:36:20.007690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.072 [2024-07-12 20:36:20.007707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:26.072 [2024-07-12 20:36:20.007730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:26.072 [2024-07-12 20:36:20.007742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.072 [2024-07-12 20:36:20.007790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:26.072 [2024-07-12 20:36:20.010081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.072 [2024-07-12 20:36:20.010129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:26.072 [2024-07-12 20:36:20.010146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:19:26.072 [2024-07-12 20:36:20.010172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.072 [2024-07-12 20:36:20.010231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.072 [2024-07-12 20:36:20.010264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:26.072 [2024-07-12 20:36:20.010282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:26.072 [2024-07-12 20:36:20.010304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.072 [2024-07-12 20:36:20.010347] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:26.072 [2024-07-12 20:36:20.010377] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:26.072 [2024-07-12 20:36:20.010425] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:26.072 [2024-07-12 20:36:20.010457] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:26.072 [2024-07-12 20:36:20.010571] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:26.072 [2024-07-12 20:36:20.010587] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:26.072 [2024-07-12 20:36:20.010602] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:26.072 [2024-07-12 20:36:20.010626] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:26.072 [2024-07-12 20:36:20.010640] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:26.072 [2024-07-12 20:36:20.010653] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:26.072 [2024-07-12 20:36:20.010669] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:19:26.072 [2024-07-12 20:36:20.010688] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:26.072 [2024-07-12 20:36:20.010699] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:26.072 [2024-07-12 20:36:20.010711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.072 [2024-07-12 20:36:20.010728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:26.072 [2024-07-12 20:36:20.010741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:19:26.072 [2024-07-12 20:36:20.010752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.072 [2024-07-12 20:36:20.010858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.072 [2024-07-12 20:36:20.010872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:26.072 [2024-07-12 20:36:20.010884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:26.072 [2024-07-12 20:36:20.010899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.072 [2024-07-12 20:36:20.011036] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:26.072 [2024-07-12 20:36:20.011055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:26.072 [2024-07-12 20:36:20.011076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:26.072 [2024-07-12 20:36:20.011089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:26.072 [2024-07-12 20:36:20.011125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:26.072 [2024-07-12 20:36:20.011149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:26.072 [2024-07-12 20:36:20.011160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:26.072 [2024-07-12 20:36:20.011187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:26.072 [2024-07-12 20:36:20.011198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:26.072 [2024-07-12 20:36:20.011208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:26.072 [2024-07-12 20:36:20.011230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:26.072 [2024-07-12 20:36:20.011257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:26.072 [2024-07-12 20:36:20.011270] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:26.072 [2024-07-12 20:36:20.011291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:26.072 [2024-07-12 20:36:20.011301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:26.072 [2024-07-12 20:36:20.011322] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011333] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.072 [2024-07-12 20:36:20.011343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:26.072 [2024-07-12 20:36:20.011353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.072 [2024-07-12 20:36:20.011381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:26.072 [2024-07-12 20:36:20.011393] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.072 [2024-07-12 20:36:20.011413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:26.072 [2024-07-12 20:36:20.011423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011433] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.072 [2024-07-12 20:36:20.011444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:26.072 [2024-07-12 20:36:20.011454] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011464] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:26.072 [2024-07-12 20:36:20.011474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:26.072 [2024-07-12 20:36:20.011484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:26.072 [2024-07-12 20:36:20.011494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:26.072 [2024-07-12 20:36:20.011504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:26.072 [2024-07-12 20:36:20.011515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:26.072 [2024-07-12 20:36:20.011525] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:26.072 [2024-07-12 20:36:20.011550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:26.072 [2024-07-12 20:36:20.011562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011572] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:26.072 [2024-07-12 20:36:20.011583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:26.072 [2024-07-12 20:36:20.011594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:26.072 [2024-07-12 20:36:20.011620] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.072 [2024-07-12 20:36:20.011631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:26.072 [2024-07-12 20:36:20.011641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:26.072 [2024-07-12 20:36:20.011651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:26.072 [2024-07-12 20:36:20.011661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:26.072 [2024-07-12 20:36:20.011671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:26.072 [2024-07-12 20:36:20.011682] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:19:26.072 [2024-07-12 20:36:20.011693] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:26.072 [2024-07-12 20:36:20.011725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:26.072 [2024-07-12 20:36:20.011738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:26.072 [2024-07-12 20:36:20.011749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:26.072 [2024-07-12 20:36:20.011763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:26.072 [2024-07-12 20:36:20.011774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:26.072 [2024-07-12 20:36:20.011784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:26.072 [2024-07-12 20:36:20.011795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:26.072 [2024-07-12 20:36:20.011805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:26.072 [2024-07-12 20:36:20.011816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:26.072 [2024-07-12 20:36:20.011826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:26.072 [2024-07-12 20:36:20.011836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:26.072 [2024-07-12 20:36:20.011847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:26.072 [2024-07-12 20:36:20.011857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:26.072 [2024-07-12 20:36:20.011867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:26.072 [2024-07-12 20:36:20.011878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:26.072 [2024-07-12 20:36:20.011888] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:26.073 [2024-07-12 20:36:20.011900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:26.073 [2024-07-12 20:36:20.011912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:26.073 [2024-07-12 20:36:20.011922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:26.073 [2024-07-12 20:36:20.011937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:26.073 [2024-07-12 20:36:20.011948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:26.073 [2024-07-12 20:36:20.011960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.011970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:26.073 [2024-07-12 20:36:20.011982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:19:26.073 [2024-07-12 20:36:20.012008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.036609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.036698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:26.073 [2024-07-12 20:36:20.036743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.501 ms 00:19:26.073 [2024-07-12 20:36:20.036755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.036952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.036989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:26.073 [2024-07-12 20:36:20.037015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:26.073 [2024-07-12 20:36:20.037027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.049556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.049617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:26.073 [2024-07-12 20:36:20.049656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.494 ms 00:19:26.073 [2024-07-12 20:36:20.049668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.049776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.049795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.073 [2024-07-12 20:36:20.049813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:26.073 [2024-07-12 20:36:20.049834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.050449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.050492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.073 [2024-07-12 20:36:20.050520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:19:26.073 [2024-07-12 20:36:20.050552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.050751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.050804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.073 [2024-07-12 20:36:20.050844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:19:26.073 [2024-07-12 20:36:20.050856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.059332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 
20:36:20.059373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:26.073 [2024-07-12 20:36:20.059390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.436 ms 00:19:26.073 [2024-07-12 20:36:20.059409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.062813] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:26.073 [2024-07-12 20:36:20.062870] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:26.073 [2024-07-12 20:36:20.062915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.062927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:26.073 [2024-07-12 20:36:20.062939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.357 ms 00:19:26.073 [2024-07-12 20:36:20.062991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.080066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.080124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:26.073 [2024-07-12 20:36:20.080143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.017 ms 00:19:26.073 [2024-07-12 20:36:20.080163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.082262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.082332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:26.073 [2024-07-12 20:36:20.082348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.975 ms 00:19:26.073 [2024-07-12 20:36:20.082360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.083970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.084009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:26.073 [2024-07-12 20:36:20.084025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:19:26.073 [2024-07-12 20:36:20.084045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.084492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.084523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:26.073 [2024-07-12 20:36:20.084537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:19:26.073 [2024-07-12 20:36:20.084549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.108366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.108499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:26.073 [2024-07-12 20:36:20.108537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.754 ms 00:19:26.073 [2024-07-12 20:36:20.108557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.116716] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:26.073 [2024-07-12 20:36:20.137164] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.137265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:26.073 [2024-07-12 20:36:20.137286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.432 ms 00:19:26.073 [2024-07-12 20:36:20.137298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.137439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.137458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:26.073 [2024-07-12 20:36:20.137471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:26.073 [2024-07-12 20:36:20.137494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.137584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.137601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:26.073 [2024-07-12 20:36:20.137613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:26.073 [2024-07-12 20:36:20.137624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.137658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.137679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:26.073 [2024-07-12 20:36:20.137700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:26.073 [2024-07-12 20:36:20.137712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.137756] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:26.073 [2024-07-12 20:36:20.137774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.137786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:26.073 [2024-07-12 20:36:20.137799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:26.073 [2024-07-12 20:36:20.137811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.142244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.142325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:26.073 [2024-07-12 20:36:20.142365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.405 ms 00:19:26.073 [2024-07-12 20:36:20.142377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.142467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.073 [2024-07-12 20:36:20.142486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:26.073 [2024-07-12 20:36:20.142498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:26.073 [2024-07-12 20:36:20.142509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.073 [2024-07-12 20:36:20.143922] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:26.073 [2024-07-12 20:36:20.145103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.550 
ms, result 0 00:19:26.073 [2024-07-12 20:36:20.145939] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:26.073 [2024-07-12 20:36:20.153960] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:26.333  Copying: 4096/4096 [kB] (average 21 MBps)[2024-07-12 20:36:20.337735] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:26.333 [2024-07-12 20:36:20.339501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.333 [2024-07-12 20:36:20.339541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:26.333 [2024-07-12 20:36:20.339577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:26.333 [2024-07-12 20:36:20.339589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.333 [2024-07-12 20:36:20.339618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:26.333 [2024-07-12 20:36:20.340482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.333 [2024-07-12 20:36:20.340509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:26.333 [2024-07-12 20:36:20.340524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:19:26.333 [2024-07-12 20:36:20.340534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.334 [2024-07-12 20:36:20.342366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.334 [2024-07-12 20:36:20.342419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:26.334 [2024-07-12 20:36:20.342448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.798 ms 00:19:26.334 [2024-07-12 20:36:20.342460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.334 [2024-07-12 20:36:20.346183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.334 [2024-07-12 20:36:20.346224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:26.334 [2024-07-12 20:36:20.346251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.699 ms 00:19:26.334 [2024-07-12 20:36:20.346266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.334 [2024-07-12 20:36:20.353120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.334 [2024-07-12 20:36:20.353173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:26.334 [2024-07-12 20:36:20.353203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.814 ms 00:19:26.334 [2024-07-12 20:36:20.353214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.334 [2024-07-12 20:36:20.354792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.334 [2024-07-12 20:36:20.354873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:26.334 [2024-07-12 20:36:20.354902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:19:26.334 [2024-07-12 20:36:20.354913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.334 [2024-07-12 20:36:20.358750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.334 [2024-07-12 20:36:20.358833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist valid map metadata 00:19:26.334 [2024-07-12 20:36:20.358863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.775 ms 00:19:26.334 [2024-07-12 20:36:20.358874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.334 [2024-07-12 20:36:20.359034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.334 [2024-07-12 20:36:20.359054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:26.334 [2024-07-12 20:36:20.359066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:19:26.334 [2024-07-12 20:36:20.359077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.334 [2024-07-12 20:36:20.360866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.334 [2024-07-12 20:36:20.360918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:26.334 [2024-07-12 20:36:20.360949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:19:26.334 [2024-07-12 20:36:20.360959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.334 [2024-07-12 20:36:20.362477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.334 [2024-07-12 20:36:20.362516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:26.334 [2024-07-12 20:36:20.362530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.465 ms 00:19:26.334 [2024-07-12 20:36:20.362541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.334 [2024-07-12 20:36:20.363682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.334 [2024-07-12 20:36:20.363734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:26.334 [2024-07-12 20:36:20.363748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:19:26.334 [2024-07-12 20:36:20.363758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.334 [2024-07-12 20:36:20.364966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.334 [2024-07-12 20:36:20.365004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:26.334 [2024-07-12 20:36:20.365018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:19:26.334 [2024-07-12 20:36:20.365029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.334 [2024-07-12 20:36:20.365066] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:26.334 [2024-07-12 20:36:20.365089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365191] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 
20:36:20.365495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:19:26.334 [2024-07-12 20:36:20.365784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:26.334 [2024-07-12 20:36:20.365852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.365864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.365876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.365887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.365898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.365910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.365922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.365933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.365945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.365956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.365967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.365979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.365991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:26.335 [2024-07-12 20:36:20.366323] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:26.335 [2024-07-12 20:36:20.366335] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 292a0a4e-5585-47cf-8793-eea72c677626 00:19:26.335 [2024-07-12 20:36:20.366360] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:26.335 [2024-07-12 20:36:20.366371] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:26.335 [2024-07-12 20:36:20.366382] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:26.335 [2024-07-12 20:36:20.366398] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:26.335 [2024-07-12 20:36:20.366417] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:19:26.335 [2024-07-12 20:36:20.366433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:26.335 [2024-07-12 20:36:20.366444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:26.335 [2024-07-12 20:36:20.366454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:26.335 [2024-07-12 20:36:20.366464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:26.335 [2024-07-12 20:36:20.366475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.335 [2024-07-12 20:36:20.366499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:26.335 [2024-07-12 20:36:20.366511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:19:26.335 [2024-07-12 20:36:20.366523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.368705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.335 [2024-07-12 20:36:20.368756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:26.335 [2024-07-12 20:36:20.368770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:19:26.335 [2024-07-12 20:36:20.368790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.368916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.335 [2024-07-12 20:36:20.368930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:26.335 [2024-07-12 20:36:20.368941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:26.335 [2024-07-12 20:36:20.368952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.376425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.335 [2024-07-12 20:36:20.376481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:26.335 [2024-07-12 20:36:20.376511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.335 [2024-07-12 20:36:20.376522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.376602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.335 [2024-07-12 20:36:20.376619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.335 [2024-07-12 20:36:20.376630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.335 [2024-07-12 20:36:20.376640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.376691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.335 [2024-07-12 20:36:20.376723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.335 [2024-07-12 20:36:20.376741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.335 [2024-07-12 20:36:20.376753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.376776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.335 [2024-07-12 20:36:20.376789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.335 [2024-07-12 20:36:20.376800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.335 [2024-07-12 
20:36:20.376810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.389357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.335 [2024-07-12 20:36:20.389427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:26.335 [2024-07-12 20:36:20.389459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.335 [2024-07-12 20:36:20.389470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.399639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.335 [2024-07-12 20:36:20.399704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:26.335 [2024-07-12 20:36:20.399736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.335 [2024-07-12 20:36:20.399748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.399817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.335 [2024-07-12 20:36:20.399839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:26.335 [2024-07-12 20:36:20.399850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.335 [2024-07-12 20:36:20.399869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.399905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.335 [2024-07-12 20:36:20.399918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:26.335 [2024-07-12 20:36:20.399929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.335 [2024-07-12 20:36:20.399939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.400043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.335 [2024-07-12 20:36:20.400061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:26.335 [2024-07-12 20:36:20.400074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.335 [2024-07-12 20:36:20.400090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.400155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.335 [2024-07-12 20:36:20.400172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:26.335 [2024-07-12 20:36:20.400184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.335 [2024-07-12 20:36:20.400195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.400243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.335 [2024-07-12 20:36:20.400286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:26.335 [2024-07-12 20:36:20.400312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.335 [2024-07-12 20:36:20.400327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.400391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.335 [2024-07-12 20:36:20.400408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:26.335 [2024-07-12 20:36:20.400420] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.335 [2024-07-12 20:36:20.400431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.335 [2024-07-12 20:36:20.400612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.077 ms, result 0 00:19:26.594 00:19:26.594 00:19:26.594 20:36:20 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=92795 00:19:26.594 20:36:20 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:26.594 20:36:20 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 92795 00:19:26.594 20:36:20 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 92795 ']' 00:19:26.594 20:36:20 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.594 20:36:20 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.594 20:36:20 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.594 20:36:20 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.594 20:36:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:26.853 [2024-07-12 20:36:20.791942] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:19:26.853 [2024-07-12 20:36:20.792141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92795 ] 00:19:26.853 [2024-07-12 20:36:20.934944] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:26.853 [2024-07-12 20:36:20.954672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.112 [2024-07-12 20:36:21.042569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.679 20:36:21 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.679 20:36:21 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:19:27.679 20:36:21 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:27.938 [2024-07-12 20:36:22.020347] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:27.938 [2024-07-12 20:36:22.020465] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:28.199 [2024-07-12 20:36:22.173284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.199 [2024-07-12 20:36:22.173415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:28.199 [2024-07-12 20:36:22.173438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:28.199 [2024-07-12 20:36:22.173465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.200 [2024-07-12 20:36:22.176370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.200 [2024-07-12 20:36:22.176448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.200 [2024-07-12 20:36:22.176465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.874 ms 00:19:28.200 [2024-07-12 20:36:22.176479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.200 [2024-07-12 20:36:22.176721] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:28.200 [2024-07-12 20:36:22.177048] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:28.200 [2024-07-12 20:36:22.177093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.200 [2024-07-12 20:36:22.177113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.200 [2024-07-12 20:36:22.177127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:19:28.200 [2024-07-12 20:36:22.177157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.200 [2024-07-12 20:36:22.179319] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:28.200 [2024-07-12 20:36:22.182293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.200 [2024-07-12 20:36:22.182360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:28.200 [2024-07-12 20:36:22.182397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:19:28.200 [2024-07-12 20:36:22.182409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.200 [2024-07-12 20:36:22.182488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.200 [2024-07-12 20:36:22.182508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:28.200 [2024-07-12 20:36:22.182526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:28.200 [2024-07-12 20:36:22.182541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.200 [2024-07-12 20:36:22.191177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.200 [2024-07-12 
20:36:22.191227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.200 [2024-07-12 20:36:22.191262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.556 ms 00:19:28.200 [2024-07-12 20:36:22.191277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.200 [2024-07-12 20:36:22.191453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.200 [2024-07-12 20:36:22.191496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.200 [2024-07-12 20:36:22.191526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:28.200 [2024-07-12 20:36:22.191538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.200 [2024-07-12 20:36:22.191595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.200 [2024-07-12 20:36:22.191624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:28.200 [2024-07-12 20:36:22.191642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:28.200 [2024-07-12 20:36:22.191663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.200 [2024-07-12 20:36:22.191705] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:28.200 [2024-07-12 20:36:22.193782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.200 [2024-07-12 20:36:22.193857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.200 [2024-07-12 20:36:22.193889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.091 ms 00:19:28.200 [2024-07-12 20:36:22.193902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.200 [2024-07-12 20:36:22.193950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.200 [2024-07-12 20:36:22.193967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:28.200 [2024-07-12 20:36:22.193980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:28.200 [2024-07-12 20:36:22.194004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.200 [2024-07-12 20:36:22.194036] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:28.200 [2024-07-12 20:36:22.194090] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:28.200 [2024-07-12 20:36:22.194146] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:28.200 [2024-07-12 20:36:22.194178] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:28.200 [2024-07-12 20:36:22.194303] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:28.200 [2024-07-12 20:36:22.194327] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:28.200 [2024-07-12 20:36:22.194343] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:28.200 [2024-07-12 20:36:22.194371] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:28.200 [2024-07-12 20:36:22.194385] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:28.200 [2024-07-12 20:36:22.194404] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:28.200 [2024-07-12 20:36:22.194423] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:28.200 [2024-07-12 20:36:22.194441] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:28.200 [2024-07-12 20:36:22.194452] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:28.200 [2024-07-12 20:36:22.194466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.200 [2024-07-12 20:36:22.194478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:28.200 [2024-07-12 20:36:22.194492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:19:28.200 [2024-07-12 20:36:22.194503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.200 [2024-07-12 20:36:22.194601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.200 [2024-07-12 20:36:22.194618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:28.200 [2024-07-12 20:36:22.194633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:28.200 [2024-07-12 20:36:22.194645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.200 [2024-07-12 20:36:22.194771] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:28.200 [2024-07-12 20:36:22.194791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:28.200 [2024-07-12 20:36:22.194806] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.200 [2024-07-12 20:36:22.194819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.200 [2024-07-12 20:36:22.194838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:28.200 [2024-07-12 20:36:22.194849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:28.200 [2024-07-12 20:36:22.194864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:28.200 [2024-07-12 20:36:22.194875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:28.200 [2024-07-12 20:36:22.194888] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:28.200 [2024-07-12 20:36:22.194900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.200 [2024-07-12 20:36:22.194913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:28.200 [2024-07-12 20:36:22.194924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:28.200 [2024-07-12 20:36:22.194937] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.200 [2024-07-12 20:36:22.194960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:28.200 [2024-07-12 20:36:22.194977] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:28.200 [2024-07-12 20:36:22.194988] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.200 [2024-07-12 20:36:22.195000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:28.200 [2024-07-12 20:36:22.195011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:28.200 [2024-07-12 20:36:22.195024] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:28.200 [2024-07-12 20:36:22.195035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:28.200 [2024-07-12 20:36:22.195051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:28.200 [2024-07-12 20:36:22.195061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.200 [2024-07-12 20:36:22.195089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:28.200 [2024-07-12 20:36:22.195101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:28.200 [2024-07-12 20:36:22.195114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.200 [2024-07-12 20:36:22.195130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:28.200 [2024-07-12 20:36:22.195143] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:28.200 [2024-07-12 20:36:22.195154] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.200 [2024-07-12 20:36:22.195167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:28.200 [2024-07-12 20:36:22.195178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:28.200 [2024-07-12 20:36:22.195190] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.200 [2024-07-12 20:36:22.195201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:28.200 [2024-07-12 20:36:22.195214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:28.200 [2024-07-12 20:36:22.195224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.200 [2024-07-12 20:36:22.195237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:28.200 [2024-07-12 20:36:22.195271] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:28.200 [2024-07-12 20:36:22.195288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.200 [2024-07-12 20:36:22.195300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:28.200 [2024-07-12 20:36:22.195315] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:28.200 [2024-07-12 20:36:22.195326] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.200 [2024-07-12 20:36:22.195339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:28.200 [2024-07-12 20:36:22.195351] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:28.200 [2024-07-12 20:36:22.195364] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.200 [2024-07-12 20:36:22.195374] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:28.200 [2024-07-12 20:36:22.195389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:28.200 [2024-07-12 20:36:22.195400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.200 [2024-07-12 20:36:22.195426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.200 [2024-07-12 20:36:22.195447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:28.200 [2024-07-12 20:36:22.195460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:28.200 [2024-07-12 20:36:22.195471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:28.200 [2024-07-12 20:36:22.195484] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:28.200 [2024-07-12 20:36:22.195494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:28.201 [2024-07-12 20:36:22.195510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:28.201 [2024-07-12 20:36:22.195523] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:28.201 [2024-07-12 20:36:22.195540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.201 [2024-07-12 20:36:22.195557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:28.201 [2024-07-12 20:36:22.195571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:28.201 [2024-07-12 20:36:22.195583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:28.201 [2024-07-12 20:36:22.195597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:28.201 [2024-07-12 20:36:22.195609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:28.201 [2024-07-12 20:36:22.195624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:28.201 [2024-07-12 20:36:22.195635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:28.201 [2024-07-12 20:36:22.195649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:28.201 [2024-07-12 20:36:22.195660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:28.201 [2024-07-12 20:36:22.195674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:28.201 [2024-07-12 20:36:22.195686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:28.201 [2024-07-12 20:36:22.195700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:28.201 [2024-07-12 20:36:22.195711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:28.201 [2024-07-12 20:36:22.195728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:28.201 [2024-07-12 20:36:22.195740] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:28.201 [2024-07-12 20:36:22.195756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.201 [2024-07-12 20:36:22.195769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:19:28.201 [2024-07-12 20:36:22.195783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:28.201 [2024-07-12 20:36:22.195795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:28.201 [2024-07-12 20:36:22.195810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:28.201 [2024-07-12 20:36:22.195823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.195837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:28.201 [2024-07-12 20:36:22.195849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:19:28.201 [2024-07-12 20:36:22.195863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.211567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.211659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.201 [2024-07-12 20:36:22.211680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.603 ms 00:19:28.201 [2024-07-12 20:36:22.211698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.211866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.211901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:28.201 [2024-07-12 20:36:22.211933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:28.201 [2024-07-12 20:36:22.211963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.225812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.225899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.201 [2024-07-12 20:36:22.225935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.804 ms 00:19:28.201 [2024-07-12 20:36:22.225964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.226072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.226094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.201 [2024-07-12 20:36:22.226109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.201 [2024-07-12 20:36:22.226123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.226698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.226771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.201 [2024-07-12 20:36:22.226788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:19:28.201 [2024-07-12 20:36:22.226803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.227016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.227046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.201 [2024-07-12 20:36:22.227061] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:19:28.201 [2024-07-12 20:36:22.227075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.236696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.236755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.201 [2024-07-12 20:36:22.236790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.590 ms 00:19:28.201 [2024-07-12 20:36:22.236804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.240271] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:28.201 [2024-07-12 20:36:22.240392] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:28.201 [2024-07-12 20:36:22.240437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.240453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:28.201 [2024-07-12 20:36:22.240465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.447 ms 00:19:28.201 [2024-07-12 20:36:22.240478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.255450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.255558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:28.201 [2024-07-12 20:36:22.255593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.921 ms 00:19:28.201 [2024-07-12 20:36:22.255626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.257690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.257762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:28.201 [2024-07-12 20:36:22.257794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.956 ms 00:19:28.201 [2024-07-12 20:36:22.257808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.259644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.259716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:28.201 [2024-07-12 20:36:22.259732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.767 ms 00:19:28.201 [2024-07-12 20:36:22.259763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.260183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.260218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:28.201 [2024-07-12 20:36:22.260234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:19:28.201 [2024-07-12 20:36:22.260266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.293568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.293654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:28.201 [2024-07-12 20:36:22.293679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.267 ms 
00:19:28.201 [2024-07-12 20:36:22.293698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.302347] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:28.201 [2024-07-12 20:36:22.323653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.323739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:28.201 [2024-07-12 20:36:22.323780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.767 ms 00:19:28.201 [2024-07-12 20:36:22.323808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.323941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.323963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:28.201 [2024-07-12 20:36:22.323980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:28.201 [2024-07-12 20:36:22.323991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.324099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.324117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:28.201 [2024-07-12 20:36:22.324141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:28.201 [2024-07-12 20:36:22.324153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.324190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.324203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:28.201 [2024-07-12 20:36:22.324224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:28.201 [2024-07-12 20:36:22.324236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.324282] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:28.201 [2024-07-12 20:36:22.324336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.324355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:28.201 [2024-07-12 20:36:22.324368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:28.201 [2024-07-12 20:36:22.324382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.328961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.329039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:28.201 [2024-07-12 20:36:22.329056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.546 ms 00:19:28.201 [2024-07-12 20:36:22.329075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.201 [2024-07-12 20:36:22.329169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.201 [2024-07-12 20:36:22.329194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:28.201 [2024-07-12 20:36:22.329211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:28.201 [2024-07-12 20:36:22.329224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.202 [2024-07-12 
20:36:22.330549] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:28.202 [2024-07-12 20:36:22.331841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.938 ms, result 0 00:19:28.202 [2024-07-12 20:36:22.333371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.202 Some configs were skipped because the RPC state that can call them passed over. 00:19:28.460 20:36:22 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:28.460 [2024-07-12 20:36:22.598063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.460 [2024-07-12 20:36:22.598162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:28.460 [2024-07-12 20:36:22.598204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:19:28.460 [2024-07-12 20:36:22.598218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.460 [2024-07-12 20:36:22.598326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.739 ms, result 0 00:19:28.460 true 00:19:28.719 20:36:22 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:28.719 [2024-07-12 20:36:22.830093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.719 [2024-07-12 20:36:22.830158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:28.719 [2024-07-12 20:36:22.830180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:19:28.719 [2024-07-12 20:36:22.830195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.719 [2024-07-12 20:36:22.830260] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.444 ms, result 0 00:19:28.719 true 00:19:28.719 20:36:22 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 92795 00:19:28.719 20:36:22 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 92795 ']' 00:19:28.719 20:36:22 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 92795 00:19:28.719 20:36:22 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:19:28.719 20:36:22 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.719 20:36:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92795 00:19:28.979 killing process with pid 92795 00:19:28.979 20:36:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:28.979 20:36:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:28.979 20:36:22 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92795' 00:19:28.979 20:36:22 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 92795 00:19:28.979 20:36:22 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 92795 00:19:28.979 [2024-07-12 20:36:23.048722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.979 [2024-07-12 20:36:23.048791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.979 [2024-07-12 20:36:23.048846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:28.979 [2024-07-12 
20:36:23.048866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.979 [2024-07-12 20:36:23.048903] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:28.979 [2024-07-12 20:36:23.049716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.979 [2024-07-12 20:36:23.049738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.979 [2024-07-12 20:36:23.049753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:19:28.979 [2024-07-12 20:36:23.049766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.979 [2024-07-12 20:36:23.050059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.979 [2024-07-12 20:36:23.050081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.979 [2024-07-12 20:36:23.050094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:19:28.979 [2024-07-12 20:36:23.050106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.979 [2024-07-12 20:36:23.054118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.979 [2024-07-12 20:36:23.054165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.979 [2024-07-12 20:36:23.054184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.989 ms 00:19:28.979 [2024-07-12 20:36:23.054203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.979 [2024-07-12 20:36:23.061161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.979 [2024-07-12 20:36:23.061210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.979 [2024-07-12 20:36:23.061257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.901 ms 00:19:28.979 [2024-07-12 20:36:23.061303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.979 [2024-07-12 20:36:23.062779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.979 [2024-07-12 20:36:23.062838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.979 [2024-07-12 20:36:23.062853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:19:28.979 [2024-07-12 20:36:23.062865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.979 [2024-07-12 20:36:23.066386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.979 [2024-07-12 20:36:23.066444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.979 [2024-07-12 20:36:23.066460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.481 ms 00:19:28.979 [2024-07-12 20:36:23.066476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.979 [2024-07-12 20:36:23.066624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.979 [2024-07-12 20:36:23.066647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.979 [2024-07-12 20:36:23.066659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:28.979 [2024-07-12 20:36:23.066670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.979 [2024-07-12 20:36:23.068971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.979 [2024-07-12 20:36:23.069029] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:28.979 [2024-07-12 20:36:23.069044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.278 ms 00:19:28.979 [2024-07-12 20:36:23.069060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.979 [2024-07-12 20:36:23.070715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.979 [2024-07-12 20:36:23.070756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:28.979 [2024-07-12 20:36:23.070771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:19:28.979 [2024-07-12 20:36:23.070784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.979 [2024-07-12 20:36:23.071934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.979 [2024-07-12 20:36:23.071992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.980 [2024-07-12 20:36:23.072007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:19:28.980 [2024-07-12 20:36:23.072019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.980 [2024-07-12 20:36:23.073248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.980 [2024-07-12 20:36:23.073333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.980 [2024-07-12 20:36:23.073351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms 00:19:28.980 [2024-07-12 20:36:23.073365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.980 [2024-07-12 20:36:23.073421] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.980 [2024-07-12 20:36:23.073450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073615] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.073990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074003] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.074751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 
20:36:23.074882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.075988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:19:28.980 [2024-07-12 20:36:23.076052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.980 [2024-07-12 20:36:23.076225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.981 [2024-07-12 20:36:23.076262] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.981 [2024-07-12 20:36:23.076289] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 292a0a4e-5585-47cf-8793-eea72c677626 00:19:28.981 [2024-07-12 20:36:23.076309] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.981 [2024-07-12 20:36:23.076320] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.981 [2024-07-12 20:36:23.076332] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.981 [2024-07-12 20:36:23.076347] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.981 [2024-07-12 20:36:23.076360] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.981 [2024-07-12 20:36:23.076371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.981 [2024-07-12 20:36:23.076383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.981 [2024-07-12 20:36:23.076393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.981 [2024-07-12 20:36:23.076405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.981 [2024-07-12 20:36:23.076417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.981 [2024-07-12 20:36:23.076432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.981 [2024-07-12 20:36:23.076444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.998 ms 00:19:28.981 [2024-07-12 20:36:23.076460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.078614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.981 [2024-07-12 20:36:23.078648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.981 [2024-07-12 20:36:23.078664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:19:28.981 [2024-07-12 20:36:23.078677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.078868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.981 [2024-07-12 20:36:23.078888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.981 [2024-07-12 20:36:23.078901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:19:28.981 [2024-07-12 20:36:23.078914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.087461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.981 [2024-07-12 20:36:23.087513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.981 [2024-07-12 20:36:23.087546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.981 [2024-07-12 20:36:23.087586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.087689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.981 [2024-07-12 20:36:23.087726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.981 [2024-07-12 20:36:23.087739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.981 [2024-07-12 20:36:23.087753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.087812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.981 [2024-07-12 20:36:23.087834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.981 [2024-07-12 20:36:23.087846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.981 [2024-07-12 20:36:23.087858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.087884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.981 [2024-07-12 20:36:23.087899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.981 [2024-07-12 20:36:23.087909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.981 [2024-07-12 20:36:23.087921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.101332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.981 [2024-07-12 20:36:23.101456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.981 [2024-07-12 20:36:23.101491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.981 [2024-07-12 20:36:23.101506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.111967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.981 [2024-07-12 20:36:23.112038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.981 [2024-07-12 20:36:23.112056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.981 
[2024-07-12 20:36:23.112073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.112162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.981 [2024-07-12 20:36:23.112183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.981 [2024-07-12 20:36:23.112195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.981 [2024-07-12 20:36:23.112207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.112246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.981 [2024-07-12 20:36:23.112315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.981 [2024-07-12 20:36:23.112330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.981 [2024-07-12 20:36:23.112343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.112465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.981 [2024-07-12 20:36:23.112492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.981 [2024-07-12 20:36:23.112505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.981 [2024-07-12 20:36:23.112518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.112569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.981 [2024-07-12 20:36:23.112606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:28.981 [2024-07-12 20:36:23.112619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.981 [2024-07-12 20:36:23.112636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.112698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.981 [2024-07-12 20:36:23.112741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.981 [2024-07-12 20:36:23.112757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.981 [2024-07-12 20:36:23.112770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.112829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.981 [2024-07-12 20:36:23.112850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.981 [2024-07-12 20:36:23.112862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.981 [2024-07-12 20:36:23.112875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.981 [2024-07-12 20:36:23.113060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.314 ms, result 0 00:19:29.548 20:36:23 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:29.548 [2024-07-12 20:36:23.489913] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:19:29.548 [2024-07-12 20:36:23.490071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92837 ] 00:19:29.548 [2024-07-12 20:36:23.632432] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:29.548 [2024-07-12 20:36:23.653390] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.806 [2024-07-12 20:36:23.725418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.806 [2024-07-12 20:36:23.849897] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:29.806 [2024-07-12 20:36:23.849999] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.066 [2024-07-12 20:36:24.010492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.066 [2024-07-12 20:36:24.010560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:30.066 [2024-07-12 20:36:24.010581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.066 [2024-07-12 20:36:24.010593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.066 [2024-07-12 20:36:24.013329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.066 [2024-07-12 20:36:24.013371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.066 [2024-07-12 20:36:24.013388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.708 ms 00:19:30.066 [2024-07-12 20:36:24.013410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.066 [2024-07-12 20:36:24.013536] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:30.066 [2024-07-12 20:36:24.013846] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:30.066 [2024-07-12 20:36:24.013884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.066 [2024-07-12 20:36:24.013898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.066 [2024-07-12 20:36:24.013910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:19:30.066 [2024-07-12 20:36:24.013921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.066 [2024-07-12 20:36:24.015971] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:30.066 [2024-07-12 20:36:24.019025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.066 [2024-07-12 20:36:24.019069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:30.066 [2024-07-12 20:36:24.019099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.056 ms 00:19:30.066 [2024-07-12 20:36:24.019114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.066 [2024-07-12 20:36:24.019199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.066 [2024-07-12 20:36:24.019220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:30.066 [2024-07-12 20:36:24.019256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:30.066 [2024-07-12 
20:36:24.019286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.066 [2024-07-12 20:36:24.028174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.066 [2024-07-12 20:36:24.028216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.066 [2024-07-12 20:36:24.028248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.829 ms 00:19:30.066 [2024-07-12 20:36:24.028301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.066 [2024-07-12 20:36:24.028465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.066 [2024-07-12 20:36:24.028493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.066 [2024-07-12 20:36:24.028524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:30.066 [2024-07-12 20:36:24.028536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.066 [2024-07-12 20:36:24.028579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.066 [2024-07-12 20:36:24.028596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:30.066 [2024-07-12 20:36:24.028608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:30.066 [2024-07-12 20:36:24.028620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.066 [2024-07-12 20:36:24.028652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:30.066 [2024-07-12 20:36:24.030882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.066 [2024-07-12 20:36:24.030917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.066 [2024-07-12 20:36:24.030949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.239 ms 00:19:30.066 [2024-07-12 20:36:24.030987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.066 [2024-07-12 20:36:24.031041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.066 [2024-07-12 20:36:24.031058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:30.066 [2024-07-12 20:36:24.031075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:30.066 [2024-07-12 20:36:24.031086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.066 [2024-07-12 20:36:24.031113] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:30.066 [2024-07-12 20:36:24.031141] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:30.066 [2024-07-12 20:36:24.031186] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:30.066 [2024-07-12 20:36:24.031208] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:30.066 [2024-07-12 20:36:24.031345] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:30.066 [2024-07-12 20:36:24.031364] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:30.066 [2024-07-12 20:36:24.031380] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 
00:19:30.066 [2024-07-12 20:36:24.031394] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:30.067 [2024-07-12 20:36:24.031436] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:30.067 [2024-07-12 20:36:24.031463] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:30.067 [2024-07-12 20:36:24.031485] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:30.067 [2024-07-12 20:36:24.031496] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:30.067 [2024-07-12 20:36:24.031507] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:30.067 [2024-07-12 20:36:24.031526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.067 [2024-07-12 20:36:24.031564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:30.067 [2024-07-12 20:36:24.031578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:19:30.067 [2024-07-12 20:36:24.031596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.067 [2024-07-12 20:36:24.031694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.067 [2024-07-12 20:36:24.031709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:30.067 [2024-07-12 20:36:24.031721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:30.067 [2024-07-12 20:36:24.031748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.067 [2024-07-12 20:36:24.031857] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:30.067 [2024-07-12 20:36:24.031880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:30.067 [2024-07-12 20:36:24.031894] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.067 [2024-07-12 20:36:24.031905] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.067 [2024-07-12 20:36:24.031917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:30.067 [2024-07-12 20:36:24.031927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:30.067 [2024-07-12 20:36:24.031938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:30.067 [2024-07-12 20:36:24.031948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:30.067 [2024-07-12 20:36:24.031958] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:30.067 [2024-07-12 20:36:24.031973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.067 [2024-07-12 20:36:24.031984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:30.067 [2024-07-12 20:36:24.031995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:30.067 [2024-07-12 20:36:24.032005] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.067 [2024-07-12 20:36:24.032027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:30.067 [2024-07-12 20:36:24.032038] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:30.067 [2024-07-12 20:36:24.032048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.067 [2024-07-12 20:36:24.032058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:30.067 [2024-07-12 20:36:24.032069] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:30.067 [2024-07-12 20:36:24.032082] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.067 [2024-07-12 20:36:24.032093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:30.067 [2024-07-12 20:36:24.032106] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:30.067 [2024-07-12 20:36:24.032116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.067 [2024-07-12 20:36:24.032127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:30.067 [2024-07-12 20:36:24.032138] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:30.067 [2024-07-12 20:36:24.032148] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.067 [2024-07-12 20:36:24.032165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:30.067 [2024-07-12 20:36:24.032176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:30.067 [2024-07-12 20:36:24.032186] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.067 [2024-07-12 20:36:24.032197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:30.067 [2024-07-12 20:36:24.032207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:30.067 [2024-07-12 20:36:24.032217] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.067 [2024-07-12 20:36:24.032227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:30.067 [2024-07-12 20:36:24.032238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:30.067 [2024-07-12 20:36:24.032249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.067 [2024-07-12 20:36:24.032258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:30.067 [2024-07-12 20:36:24.032269] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:30.067 [2024-07-12 20:36:24.032279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.067 [2024-07-12 20:36:24.032291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:30.067 [2024-07-12 20:36:24.032320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:30.067 [2024-07-12 20:36:24.032332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.067 [2024-07-12 20:36:24.032343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:30.067 [2024-07-12 20:36:24.032373] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:30.067 [2024-07-12 20:36:24.032384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.067 [2024-07-12 20:36:24.032394] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:30.067 [2024-07-12 20:36:24.032405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:30.067 [2024-07-12 20:36:24.032415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.067 [2024-07-12 20:36:24.032426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.067 [2024-07-12 20:36:24.032455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:30.067 [2024-07-12 20:36:24.032467] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:30.067 [2024-07-12 20:36:24.032478] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:30.067 [2024-07-12 20:36:24.032490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:30.067 [2024-07-12 20:36:24.032500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:30.067 [2024-07-12 20:36:24.032526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:30.067 [2024-07-12 20:36:24.032556] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:30.067 [2024-07-12 20:36:24.032574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.067 [2024-07-12 20:36:24.032597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:30.067 [2024-07-12 20:36:24.032609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:30.067 [2024-07-12 20:36:24.032624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:30.067 [2024-07-12 20:36:24.032637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:30.067 [2024-07-12 20:36:24.032649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:30.067 [2024-07-12 20:36:24.032661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:30.067 [2024-07-12 20:36:24.032672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:30.067 [2024-07-12 20:36:24.032684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:30.067 [2024-07-12 20:36:24.032695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:30.067 [2024-07-12 20:36:24.032707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:30.067 [2024-07-12 20:36:24.032718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:30.067 [2024-07-12 20:36:24.032730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:30.067 [2024-07-12 20:36:24.032757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:30.067 [2024-07-12 20:36:24.032783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:30.067 [2024-07-12 20:36:24.032794] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:30.067 [2024-07-12 20:36:24.032806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.067 [2024-07-12 20:36:24.032827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:30.067 [2024-07-12 20:36:24.032838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:30.067 [2024-07-12 20:36:24.032853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:30.067 [2024-07-12 20:36:24.032865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:30.067 [2024-07-12 20:36:24.032877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.067 [2024-07-12 20:36:24.032888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:30.067 [2024-07-12 20:36:24.032899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:19:30.067 [2024-07-12 20:36:24.032918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.067 [2024-07-12 20:36:24.057823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.067 [2024-07-12 20:36:24.057923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.067 [2024-07-12 20:36:24.057962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.797 ms 00:19:30.067 [2024-07-12 20:36:24.057978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.067 [2024-07-12 20:36:24.058232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.067 [2024-07-12 20:36:24.058282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:30.067 [2024-07-12 20:36:24.058301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:30.067 [2024-07-12 20:36:24.058317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.067 [2024-07-12 20:36:24.071907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.067 [2024-07-12 20:36:24.071966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.067 [2024-07-12 20:36:24.072001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.527 ms 00:19:30.067 [2024-07-12 20:36:24.072013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.067 [2024-07-12 20:36:24.072135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.072154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.068 [2024-07-12 20:36:24.072173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:30.068 [2024-07-12 20:36:24.072186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.072759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.072795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.068 [2024-07-12 20:36:24.072819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:19:30.068 [2024-07-12 20:36:24.072835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.073007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.073028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.068 [2024-07-12 20:36:24.073041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:19:30.068 [2024-07-12 20:36:24.073052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.081884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.081926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.068 [2024-07-12 20:36:24.081971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.801 ms 00:19:30.068 [2024-07-12 20:36:24.081988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.085309] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:30.068 [2024-07-12 20:36:24.085379] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:30.068 [2024-07-12 20:36:24.085426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.085438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:30.068 [2024-07-12 20:36:24.085466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.277 ms 00:19:30.068 [2024-07-12 20:36:24.085488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.100669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.100708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:30.068 [2024-07-12 20:36:24.100740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.124 ms 00:19:30.068 [2024-07-12 20:36:24.100758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.102764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.102803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:30.068 [2024-07-12 20:36:24.102818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.917 ms 00:19:30.068 [2024-07-12 20:36:24.102828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.104541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.104580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:30.068 [2024-07-12 20:36:24.104595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:19:30.068 [2024-07-12 20:36:24.104615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.105006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.105070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:30.068 [2024-07-12 20:36:24.105094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:19:30.068 [2024-07-12 20:36:24.105105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.126894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.126987] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:30.068 [2024-07-12 20:36:24.127041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.740 ms 00:19:30.068 [2024-07-12 20:36:24.127064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.135225] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:30.068 [2024-07-12 20:36:24.157438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.157505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.068 [2024-07-12 20:36:24.157525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.219 ms 00:19:30.068 [2024-07-12 20:36:24.157537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.157730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.157750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:30.068 [2024-07-12 20:36:24.157779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:30.068 [2024-07-12 20:36:24.157790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.157882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.157906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.068 [2024-07-12 20:36:24.157919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:30.068 [2024-07-12 20:36:24.157931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.157967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.157988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.068 [2024-07-12 20:36:24.158000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:30.068 [2024-07-12 20:36:24.158012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.158052] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:30.068 [2024-07-12 20:36:24.158069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.158081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:30.068 [2024-07-12 20:36:24.158093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:30.068 [2024-07-12 20:36:24.158104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.162430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.162474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.068 [2024-07-12 20:36:24.162501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.282 ms 00:19:30.068 [2024-07-12 20:36:24.162514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.162620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.068 [2024-07-12 20:36:24.162654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.068 [2024-07-12 20:36:24.162667] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:30.068 [2024-07-12 20:36:24.162678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.068 [2024-07-12 20:36:24.163872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:30.068 [2024-07-12 20:36:24.165131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.033 ms, result 0 00:19:30.068 [2024-07-12 20:36:24.166094] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:30.068 [2024-07-12 20:36:24.174182] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:40.603  Copying: 27/256 [MB] (27 MBps) Copying: 51/256 [MB] (24 MBps) Copying: 75/256 [MB] (24 MBps) Copying: 100/256 [MB] (24 MBps) Copying: 125/256 [MB] (24 MBps) Copying: 151/256 [MB] (25 MBps) Copying: 176/256 [MB] (24 MBps) Copying: 200/256 [MB] (24 MBps) Copying: 225/256 [MB] (24 MBps) Copying: 251/256 [MB] (26 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-12 20:36:34.743205] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:40.603 [2024-07-12 20:36:34.744922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.603 [2024-07-12 20:36:34.744970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:40.603 [2024-07-12 20:36:34.744990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:40.603 [2024-07-12 20:36:34.745002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.603 [2024-07-12 20:36:34.745034] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:40.603 [2024-07-12 20:36:34.745863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.603 [2024-07-12 20:36:34.745894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:40.603 [2024-07-12 20:36:34.745909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:19:40.603 [2024-07-12 20:36:34.745920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.603 [2024-07-12 20:36:34.746223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.603 [2024-07-12 20:36:34.746264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:40.603 [2024-07-12 20:36:34.746279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:40.603 [2024-07-12 20:36:34.746291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.603 [2024-07-12 20:36:34.750365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.603 [2024-07-12 20:36:34.750408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:40.603 [2024-07-12 20:36:34.750426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.047 ms 00:19:40.603 [2024-07-12 20:36:34.750438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.863 [2024-07-12 20:36:34.758366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.863 [2024-07-12 20:36:34.758449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:40.863 [2024-07-12 20:36:34.758484] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.886 ms 00:19:40.863 [2024-07-12 20:36:34.758497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.863 [2024-07-12 20:36:34.760506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.863 [2024-07-12 20:36:34.760558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:40.863 [2024-07-12 20:36:34.760575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:19:40.863 [2024-07-12 20:36:34.760586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.863 [2024-07-12 20:36:34.764497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.863 [2024-07-12 20:36:34.764544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:40.863 [2024-07-12 20:36:34.764601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.880 ms 00:19:40.863 [2024-07-12 20:36:34.764613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.863 [2024-07-12 20:36:34.764767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.863 [2024-07-12 20:36:34.764801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:40.864 [2024-07-12 20:36:34.764816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:40.864 [2024-07-12 20:36:34.764827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.864 [2024-07-12 20:36:34.766824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.864 [2024-07-12 20:36:34.766891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:40.864 [2024-07-12 20:36:34.766923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.972 ms 00:19:40.864 [2024-07-12 20:36:34.766935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.864 [2024-07-12 20:36:34.768325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.864 [2024-07-12 20:36:34.768449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:40.864 [2024-07-12 20:36:34.768464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:19:40.864 [2024-07-12 20:36:34.768475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.864 [2024-07-12 20:36:34.769503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.864 [2024-07-12 20:36:34.769540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:40.864 [2024-07-12 20:36:34.769555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:19:40.864 [2024-07-12 20:36:34.769566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.864 [2024-07-12 20:36:34.770510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.864 [2024-07-12 20:36:34.770545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:40.864 [2024-07-12 20:36:34.770561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.883 ms 00:19:40.864 [2024-07-12 20:36:34.770571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.864 [2024-07-12 20:36:34.770596] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:40.864 [2024-07-12 20:36:34.770617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.770981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771295] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:40.864 [2024-07-12 20:36:34.771370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 
20:36:34.771607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.771895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:40.865 [2024-07-12 20:36:34.772165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
00:19:40.865 [2024-07-12 20:36:34.772188] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:40.865 [2024-07-12 20:36:34.772200] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 292a0a4e-5585-47cf-8793-eea72c677626 00:19:40.865 [2024-07-12 20:36:34.772213] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:40.865 [2024-07-12 20:36:34.772226] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:40.865 [2024-07-12 20:36:34.772265] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:40.865 [2024-07-12 20:36:34.772283] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:40.865 [2024-07-12 20:36:34.772294] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:40.865 [2024-07-12 20:36:34.772310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:40.865 [2024-07-12 20:36:34.772322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:40.865 [2024-07-12 20:36:34.772332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:40.865 [2024-07-12 20:36:34.772342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:40.865 [2024-07-12 20:36:34.772354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.865 [2024-07-12 20:36:34.772377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:40.865 [2024-07-12 20:36:34.772402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:19:40.865 [2024-07-12 20:36:34.772414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.865 [2024-07-12 20:36:34.774785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.865 [2024-07-12 20:36:34.774930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:40.865 [2024-07-12 20:36:34.775051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.342 ms 00:19:40.865 [2024-07-12 20:36:34.775166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.865 [2024-07-12 20:36:34.775356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.865 [2024-07-12 20:36:34.775421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:40.865 [2024-07-12 20:36:34.775519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:19:40.865 [2024-07-12 20:36:34.775566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.865 [2024-07-12 20:36:34.783376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.865 [2024-07-12 20:36:34.783654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.865 [2024-07-12 20:36:34.783764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.865 [2024-07-12 20:36:34.783812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.865 [2024-07-12 20:36:34.784007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.865 [2024-07-12 20:36:34.784122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.865 [2024-07-12 20:36:34.784224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.865 [2024-07-12 20:36:34.784364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.865 [2024-07-12 
20:36:34.784473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.865 [2024-07-12 20:36:34.784493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.865 [2024-07-12 20:36:34.784513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.865 [2024-07-12 20:36:34.784525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.866 [2024-07-12 20:36:34.784562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.866 [2024-07-12 20:36:34.784576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.866 [2024-07-12 20:36:34.784588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.866 [2024-07-12 20:36:34.784600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.866 [2024-07-12 20:36:34.800628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.866 [2024-07-12 20:36:34.800704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.866 [2024-07-12 20:36:34.800726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.866 [2024-07-12 20:36:34.800738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.866 [2024-07-12 20:36:34.811517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.866 [2024-07-12 20:36:34.811586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.866 [2024-07-12 20:36:34.811606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.866 [2024-07-12 20:36:34.811619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.866 [2024-07-12 20:36:34.811708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.866 [2024-07-12 20:36:34.811725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.866 [2024-07-12 20:36:34.811739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.866 [2024-07-12 20:36:34.811758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.866 [2024-07-12 20:36:34.811796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.866 [2024-07-12 20:36:34.811810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.866 [2024-07-12 20:36:34.811822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.866 [2024-07-12 20:36:34.811834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.866 [2024-07-12 20:36:34.811930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.866 [2024-07-12 20:36:34.811949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.866 [2024-07-12 20:36:34.811962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.866 [2024-07-12 20:36:34.811973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.866 [2024-07-12 20:36:34.812037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.866 [2024-07-12 20:36:34.812055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:40.866 [2024-07-12 20:36:34.812068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.866 [2024-07-12 20:36:34.812079] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.866 [2024-07-12 20:36:34.812137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.866 [2024-07-12 20:36:34.812167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.866 [2024-07-12 20:36:34.812182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.866 [2024-07-12 20:36:34.812193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.866 [2024-07-12 20:36:34.812272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.866 [2024-07-12 20:36:34.812291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.866 [2024-07-12 20:36:34.812303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.866 [2024-07-12 20:36:34.812315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.866 [2024-07-12 20:36:34.812493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.537 ms, result 0 00:19:41.126 00:19:41.126 00:19:41.126 20:36:35 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:41.694 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:41.694 20:36:35 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:41.694 20:36:35 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:41.694 20:36:35 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:41.694 20:36:35 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:41.694 20:36:35 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:41.694 20:36:35 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:41.694 Process with pid 92795 is not found 00:19:41.694 20:36:35 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 92795 00:19:41.694 20:36:35 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 92795 ']' 00:19:41.694 20:36:35 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 92795 00:19:41.694 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (92795) - No such process 00:19:41.694 20:36:35 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 92795 is not found' 00:19:41.694 00:19:41.694 real 0m56.167s 00:19:41.694 user 1m16.181s 00:19:41.694 sys 0m7.068s 00:19:41.694 20:36:35 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.694 ************************************ 00:19:41.694 END TEST ftl_trim 00:19:41.694 ************************************ 00:19:41.694 20:36:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:41.694 20:36:35 ftl -- common/autotest_common.sh@1142 -- # return 0 00:19:41.694 20:36:35 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:41.694 20:36:35 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:41.694 20:36:35 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.694 20:36:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:41.694 ************************************ 00:19:41.694 START TEST ftl_restore 00:19:41.694 ************************************ 00:19:41.694 20:36:35 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:41.953 * Looking for test storage... 00:19:41.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:41.953 
20:36:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.UWIdGS93MW 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=93018 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 93018 00:19:41.953 20:36:35 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.953 20:36:35 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 93018 ']' 00:19:41.953 20:36:35 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.953 20:36:35 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.953 20:36:35 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.953 20:36:35 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.953 20:36:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:41.953 [2024-07-12 20:36:36.055504] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:19:41.953 [2024-07-12 20:36:36.055901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93018 ] 00:19:42.212 [2024-07-12 20:36:36.208147] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:42.212 [2024-07-12 20:36:36.222918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.212 [2024-07-12 20:36:36.301006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.169 20:36:36 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.169 20:36:36 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0 00:19:43.169 20:36:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:43.169 20:36:36 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:43.169 20:36:36 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:43.169 20:36:36 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:43.169 20:36:36 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:43.169 20:36:36 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:43.169 20:36:37 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:43.169 20:36:37 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:43.169 20:36:37 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:43.169 20:36:37 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:43.169 20:36:37 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:43.169 20:36:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:43.169 20:36:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:43.169 20:36:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:43.736 20:36:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:43.736 { 00:19:43.736 "name": "nvme0n1", 00:19:43.736 "aliases": [ 00:19:43.736 "627117ce-e122-4f22-9eaf-99edec26b429" 00:19:43.736 ], 00:19:43.736 "product_name": "NVMe disk", 00:19:43.736 "block_size": 4096, 00:19:43.736 "num_blocks": 1310720, 00:19:43.736 "uuid": "627117ce-e122-4f22-9eaf-99edec26b429", 00:19:43.736 "assigned_rate_limits": { 00:19:43.736 "rw_ios_per_sec": 0, 00:19:43.736 "rw_mbytes_per_sec": 0, 00:19:43.736 "r_mbytes_per_sec": 0, 00:19:43.736 "w_mbytes_per_sec": 0 00:19:43.736 }, 00:19:43.736 "claimed": true, 00:19:43.736 "claim_type": "read_many_write_one", 00:19:43.736 "zoned": false, 00:19:43.736 "supported_io_types": { 00:19:43.736 "read": true, 00:19:43.736 "write": true, 00:19:43.736 "unmap": true, 00:19:43.736 "flush": true, 00:19:43.736 "reset": true, 00:19:43.736 "nvme_admin": true, 00:19:43.736 "nvme_io": true, 00:19:43.736 "nvme_io_md": false, 00:19:43.736 "write_zeroes": true, 00:19:43.736 "zcopy": false, 00:19:43.736 "get_zone_info": false, 00:19:43.736 "zone_management": false, 00:19:43.736 "zone_append": false, 00:19:43.736 "compare": true, 00:19:43.736 "compare_and_write": false, 00:19:43.736 "abort": true, 00:19:43.736 "seek_hole": false, 00:19:43.736 "seek_data": false, 00:19:43.736 "copy": true, 00:19:43.736 "nvme_iov_md": false 00:19:43.736 }, 00:19:43.736 "driver_specific": { 00:19:43.736 "nvme": [ 00:19:43.736 { 00:19:43.736 "pci_address": "0000:00:11.0", 00:19:43.736 "trid": { 00:19:43.736 "trtype": "PCIe", 00:19:43.736 "traddr": "0000:00:11.0" 00:19:43.736 }, 00:19:43.736 "ctrlr_data": { 00:19:43.736 "cntlid": 0, 00:19:43.736 "vendor_id": "0x1b36", 00:19:43.736 "model_number": "QEMU NVMe Ctrl", 00:19:43.736 "serial_number": "12341", 
00:19:43.736 "firmware_revision": "8.0.0", 00:19:43.736 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:43.736 "oacs": { 00:19:43.736 "security": 0, 00:19:43.736 "format": 1, 00:19:43.736 "firmware": 0, 00:19:43.736 "ns_manage": 1 00:19:43.736 }, 00:19:43.736 "multi_ctrlr": false, 00:19:43.736 "ana_reporting": false 00:19:43.736 }, 00:19:43.736 "vs": { 00:19:43.736 "nvme_version": "1.4" 00:19:43.736 }, 00:19:43.736 "ns_data": { 00:19:43.736 "id": 1, 00:19:43.736 "can_share": false 00:19:43.736 } 00:19:43.736 } 00:19:43.736 ], 00:19:43.736 "mp_policy": "active_passive" 00:19:43.736 } 00:19:43.736 } 00:19:43.736 ]' 00:19:43.736 20:36:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:43.736 20:36:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:43.736 20:36:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:43.736 20:36:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:43.736 20:36:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:43.736 20:36:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:19:43.736 20:36:37 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:43.736 20:36:37 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:43.736 20:36:37 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:43.736 20:36:37 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:43.736 20:36:37 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:43.994 20:36:37 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=1ef124a2-5cd2-4b56-85ed-4f5db32fa749 00:19:43.994 20:36:37 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:43.994 20:36:37 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ef124a2-5cd2-4b56-85ed-4f5db32fa749 00:19:44.252 20:36:38 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:44.510 20:36:38 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=104156dd-fd6a-41cf-81ae-968e98661e91 00:19:44.510 20:36:38 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 104156dd-fd6a-41cf-81ae-968e98661e91 00:19:44.768 20:36:38 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=3487cfcf-19a8-4196-8832-7970195c512b 00:19:44.768 20:36:38 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:44.768 20:36:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3487cfcf-19a8-4196-8832-7970195c512b 00:19:44.768 20:36:38 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:44.768 20:36:38 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:44.768 20:36:38 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=3487cfcf-19a8-4196-8832-7970195c512b 00:19:44.768 20:36:38 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:44.768 20:36:38 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 3487cfcf-19a8-4196-8832-7970195c512b 00:19:44.768 20:36:38 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=3487cfcf-19a8-4196-8832-7970195c512b 00:19:44.768 20:36:38 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:44.768 20:36:38 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:44.768 
20:36:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:44.768 20:36:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3487cfcf-19a8-4196-8832-7970195c512b 00:19:45.052 20:36:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:45.053 { 00:19:45.053 "name": "3487cfcf-19a8-4196-8832-7970195c512b", 00:19:45.053 "aliases": [ 00:19:45.053 "lvs/nvme0n1p0" 00:19:45.053 ], 00:19:45.053 "product_name": "Logical Volume", 00:19:45.053 "block_size": 4096, 00:19:45.053 "num_blocks": 26476544, 00:19:45.053 "uuid": "3487cfcf-19a8-4196-8832-7970195c512b", 00:19:45.053 "assigned_rate_limits": { 00:19:45.053 "rw_ios_per_sec": 0, 00:19:45.053 "rw_mbytes_per_sec": 0, 00:19:45.053 "r_mbytes_per_sec": 0, 00:19:45.053 "w_mbytes_per_sec": 0 00:19:45.053 }, 00:19:45.053 "claimed": false, 00:19:45.053 "zoned": false, 00:19:45.053 "supported_io_types": { 00:19:45.053 "read": true, 00:19:45.053 "write": true, 00:19:45.053 "unmap": true, 00:19:45.053 "flush": false, 00:19:45.053 "reset": true, 00:19:45.053 "nvme_admin": false, 00:19:45.053 "nvme_io": false, 00:19:45.053 "nvme_io_md": false, 00:19:45.053 "write_zeroes": true, 00:19:45.053 "zcopy": false, 00:19:45.053 "get_zone_info": false, 00:19:45.053 "zone_management": false, 00:19:45.053 "zone_append": false, 00:19:45.053 "compare": false, 00:19:45.053 "compare_and_write": false, 00:19:45.053 "abort": false, 00:19:45.053 "seek_hole": true, 00:19:45.053 "seek_data": true, 00:19:45.053 "copy": false, 00:19:45.053 "nvme_iov_md": false 00:19:45.053 }, 00:19:45.053 "driver_specific": { 00:19:45.053 "lvol": { 00:19:45.053 "lvol_store_uuid": "104156dd-fd6a-41cf-81ae-968e98661e91", 00:19:45.053 "base_bdev": "nvme0n1", 00:19:45.053 "thin_provision": true, 00:19:45.053 "num_allocated_clusters": 0, 00:19:45.053 "snapshot": false, 00:19:45.053 "clone": false, 00:19:45.053 "esnap_clone": false 00:19:45.053 } 00:19:45.053 } 00:19:45.053 } 00:19:45.053 ]' 00:19:45.053 20:36:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:45.053 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:45.053 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:45.053 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:45.053 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:45.053 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:45.053 20:36:39 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:45.053 20:36:39 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:45.053 20:36:39 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:45.315 20:36:39 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:45.315 20:36:39 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:45.315 20:36:39 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 3487cfcf-19a8-4196-8832-7970195c512b 00:19:45.315 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=3487cfcf-19a8-4196-8832-7970195c512b 00:19:45.315 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:45.315 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:45.315 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # 
local nb 00:19:45.315 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3487cfcf-19a8-4196-8832-7970195c512b 00:19:45.572 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:45.572 { 00:19:45.572 "name": "3487cfcf-19a8-4196-8832-7970195c512b", 00:19:45.572 "aliases": [ 00:19:45.572 "lvs/nvme0n1p0" 00:19:45.572 ], 00:19:45.572 "product_name": "Logical Volume", 00:19:45.572 "block_size": 4096, 00:19:45.572 "num_blocks": 26476544, 00:19:45.572 "uuid": "3487cfcf-19a8-4196-8832-7970195c512b", 00:19:45.572 "assigned_rate_limits": { 00:19:45.572 "rw_ios_per_sec": 0, 00:19:45.572 "rw_mbytes_per_sec": 0, 00:19:45.572 "r_mbytes_per_sec": 0, 00:19:45.572 "w_mbytes_per_sec": 0 00:19:45.572 }, 00:19:45.572 "claimed": false, 00:19:45.572 "zoned": false, 00:19:45.572 "supported_io_types": { 00:19:45.572 "read": true, 00:19:45.572 "write": true, 00:19:45.572 "unmap": true, 00:19:45.572 "flush": false, 00:19:45.572 "reset": true, 00:19:45.572 "nvme_admin": false, 00:19:45.572 "nvme_io": false, 00:19:45.572 "nvme_io_md": false, 00:19:45.572 "write_zeroes": true, 00:19:45.572 "zcopy": false, 00:19:45.572 "get_zone_info": false, 00:19:45.572 "zone_management": false, 00:19:45.572 "zone_append": false, 00:19:45.572 "compare": false, 00:19:45.572 "compare_and_write": false, 00:19:45.572 "abort": false, 00:19:45.572 "seek_hole": true, 00:19:45.573 "seek_data": true, 00:19:45.573 "copy": false, 00:19:45.573 "nvme_iov_md": false 00:19:45.573 }, 00:19:45.573 "driver_specific": { 00:19:45.573 "lvol": { 00:19:45.573 "lvol_store_uuid": "104156dd-fd6a-41cf-81ae-968e98661e91", 00:19:45.573 "base_bdev": "nvme0n1", 00:19:45.573 "thin_provision": true, 00:19:45.573 "num_allocated_clusters": 0, 00:19:45.573 "snapshot": false, 00:19:45.573 "clone": false, 00:19:45.573 "esnap_clone": false 00:19:45.573 } 00:19:45.573 } 00:19:45.573 } 00:19:45.573 ]' 00:19:45.573 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:45.573 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:45.573 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:45.830 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:45.830 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:45.830 20:36:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:45.830 20:36:39 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:45.830 20:36:39 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:46.088 20:36:40 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:46.088 20:36:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 3487cfcf-19a8-4196-8832-7970195c512b 00:19:46.088 20:36:40 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=3487cfcf-19a8-4196-8832-7970195c512b 00:19:46.088 20:36:40 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:46.088 20:36:40 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:46.088 20:36:40 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:46.088 20:36:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3487cfcf-19a8-4196-8832-7970195c512b 00:19:46.346 20:36:40 ftl.ftl_restore -- 
common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:46.346 { 00:19:46.346 "name": "3487cfcf-19a8-4196-8832-7970195c512b", 00:19:46.346 "aliases": [ 00:19:46.346 "lvs/nvme0n1p0" 00:19:46.346 ], 00:19:46.346 "product_name": "Logical Volume", 00:19:46.346 "block_size": 4096, 00:19:46.346 "num_blocks": 26476544, 00:19:46.346 "uuid": "3487cfcf-19a8-4196-8832-7970195c512b", 00:19:46.346 "assigned_rate_limits": { 00:19:46.346 "rw_ios_per_sec": 0, 00:19:46.346 "rw_mbytes_per_sec": 0, 00:19:46.346 "r_mbytes_per_sec": 0, 00:19:46.346 "w_mbytes_per_sec": 0 00:19:46.346 }, 00:19:46.346 "claimed": false, 00:19:46.346 "zoned": false, 00:19:46.346 "supported_io_types": { 00:19:46.346 "read": true, 00:19:46.346 "write": true, 00:19:46.346 "unmap": true, 00:19:46.346 "flush": false, 00:19:46.346 "reset": true, 00:19:46.346 "nvme_admin": false, 00:19:46.346 "nvme_io": false, 00:19:46.346 "nvme_io_md": false, 00:19:46.346 "write_zeroes": true, 00:19:46.346 "zcopy": false, 00:19:46.346 "get_zone_info": false, 00:19:46.346 "zone_management": false, 00:19:46.346 "zone_append": false, 00:19:46.346 "compare": false, 00:19:46.346 "compare_and_write": false, 00:19:46.346 "abort": false, 00:19:46.346 "seek_hole": true, 00:19:46.346 "seek_data": true, 00:19:46.346 "copy": false, 00:19:46.346 "nvme_iov_md": false 00:19:46.346 }, 00:19:46.346 "driver_specific": { 00:19:46.346 "lvol": { 00:19:46.346 "lvol_store_uuid": "104156dd-fd6a-41cf-81ae-968e98661e91", 00:19:46.346 "base_bdev": "nvme0n1", 00:19:46.346 "thin_provision": true, 00:19:46.346 "num_allocated_clusters": 0, 00:19:46.346 "snapshot": false, 00:19:46.346 "clone": false, 00:19:46.346 "esnap_clone": false 00:19:46.346 } 00:19:46.346 } 00:19:46.346 } 00:19:46.346 ]' 00:19:46.346 20:36:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:46.346 20:36:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:46.346 20:36:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:46.346 20:36:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:46.346 20:36:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:46.346 20:36:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:46.346 20:36:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:46.346 20:36:40 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3487cfcf-19a8-4196-8832-7970195c512b --l2p_dram_limit 10' 00:19:46.346 20:36:40 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:46.346 20:36:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:46.346 20:36:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:46.346 20:36:40 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:46.346 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:46.346 20:36:40 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3487cfcf-19a8-4196-8832-7970195c512b --l2p_dram_limit 10 -c nvc0n1p0 00:19:46.606 [2024-07-12 20:36:40.549726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.606 [2024-07-12 20:36:40.549806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:46.606 [2024-07-12 20:36:40.549829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:46.606 
[2024-07-12 20:36:40.549854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.606 [2024-07-12 20:36:40.549935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.606 [2024-07-12 20:36:40.549960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.606 [2024-07-12 20:36:40.549974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:46.606 [2024-07-12 20:36:40.549991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.606 [2024-07-12 20:36:40.550034] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:46.606 [2024-07-12 20:36:40.550432] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:46.606 [2024-07-12 20:36:40.550460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.606 [2024-07-12 20:36:40.550478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.606 [2024-07-12 20:36:40.550491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:19:46.606 [2024-07-12 20:36:40.550505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.606 [2024-07-12 20:36:40.550699] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3bb011f9-c31c-4127-9322-4bfe0622a87b 00:19:46.606 [2024-07-12 20:36:40.552579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.606 [2024-07-12 20:36:40.552621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:46.606 [2024-07-12 20:36:40.552643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:46.606 [2024-07-12 20:36:40.552656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.606 [2024-07-12 20:36:40.562228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.606 [2024-07-12 20:36:40.562304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.606 [2024-07-12 20:36:40.562333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.506 ms 00:19:46.606 [2024-07-12 20:36:40.562346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.606 [2024-07-12 20:36:40.562498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.606 [2024-07-12 20:36:40.562518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.606 [2024-07-12 20:36:40.562536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:46.606 [2024-07-12 20:36:40.562550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.606 [2024-07-12 20:36:40.562698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.606 [2024-07-12 20:36:40.562729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:46.606 [2024-07-12 20:36:40.562748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:46.606 [2024-07-12 20:36:40.562760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.606 [2024-07-12 20:36:40.562800] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.606 [2024-07-12 20:36:40.565111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.606 [2024-07-12 
20:36:40.565169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.606 [2024-07-12 20:36:40.565186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:19:46.606 [2024-07-12 20:36:40.565200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.607 [2024-07-12 20:36:40.565265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.607 [2024-07-12 20:36:40.565288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:46.607 [2024-07-12 20:36:40.565302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:46.607 [2024-07-12 20:36:40.565319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.607 [2024-07-12 20:36:40.565346] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:46.607 [2024-07-12 20:36:40.565543] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:46.607 [2024-07-12 20:36:40.565565] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:46.607 [2024-07-12 20:36:40.565584] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:46.607 [2024-07-12 20:36:40.565600] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:46.607 [2024-07-12 20:36:40.565622] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:46.607 [2024-07-12 20:36:40.565643] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:46.607 [2024-07-12 20:36:40.565664] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:46.607 [2024-07-12 20:36:40.565684] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:46.607 [2024-07-12 20:36:40.565707] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:46.607 [2024-07-12 20:36:40.565724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.607 [2024-07-12 20:36:40.565738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:46.607 [2024-07-12 20:36:40.565751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:19:46.607 [2024-07-12 20:36:40.565765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.607 [2024-07-12 20:36:40.565874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.607 [2024-07-12 20:36:40.565918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:46.607 [2024-07-12 20:36:40.565940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:46.607 [2024-07-12 20:36:40.565966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.607 [2024-07-12 20:36:40.566097] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:46.607 [2024-07-12 20:36:40.566122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:46.607 [2024-07-12 20:36:40.566136] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.607 [2024-07-12 20:36:40.566150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566166] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:19:46.607 [2024-07-12 20:36:40.566190] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:46.607 [2024-07-12 20:36:40.566226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:46.607 [2024-07-12 20:36:40.566251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.607 [2024-07-12 20:36:40.566279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:46.607 [2024-07-12 20:36:40.566293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:46.607 [2024-07-12 20:36:40.566304] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.607 [2024-07-12 20:36:40.566321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:46.607 [2024-07-12 20:36:40.566333] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:46.607 [2024-07-12 20:36:40.566346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:46.607 [2024-07-12 20:36:40.566371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:46.607 [2024-07-12 20:36:40.566382] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:46.607 [2024-07-12 20:36:40.566406] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.607 [2024-07-12 20:36:40.566431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:46.607 [2024-07-12 20:36:40.566444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.607 [2024-07-12 20:36:40.566468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:46.607 [2024-07-12 20:36:40.566478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566492] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.607 [2024-07-12 20:36:40.566502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:46.607 [2024-07-12 20:36:40.566519] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.607 [2024-07-12 20:36:40.566549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:46.607 [2024-07-12 20:36:40.566561] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566575] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.607 [2024-07-12 20:36:40.566587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:46.607 [2024-07-12 20:36:40.566600] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:46.607 [2024-07-12 20:36:40.566611] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.607 [2024-07-12 20:36:40.566625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:46.607 [2024-07-12 20:36:40.566636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:46.607 [2024-07-12 20:36:40.566649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:46.607 [2024-07-12 20:36:40.566673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:46.607 [2024-07-12 20:36:40.566684] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566697] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:46.607 [2024-07-12 20:36:40.566709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:46.607 [2024-07-12 20:36:40.566725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.607 [2024-07-12 20:36:40.566737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.607 [2024-07-12 20:36:40.566754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:46.607 [2024-07-12 20:36:40.566766] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:46.607 [2024-07-12 20:36:40.566779] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:46.607 [2024-07-12 20:36:40.566790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:46.607 [2024-07-12 20:36:40.566803] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:46.607 [2024-07-12 20:36:40.566814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:46.607 [2024-07-12 20:36:40.566833] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:46.607 [2024-07-12 20:36:40.566847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.607 [2024-07-12 20:36:40.566865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:46.607 [2024-07-12 20:36:40.566877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:46.607 [2024-07-12 20:36:40.566892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:46.607 [2024-07-12 20:36:40.566904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:46.607 [2024-07-12 20:36:40.566918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:46.607 [2024-07-12 20:36:40.566929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:46.607 [2024-07-12 20:36:40.566946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:46.607 [2024-07-12 20:36:40.566958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:19:46.607 [2024-07-12 20:36:40.566986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:46.607 [2024-07-12 20:36:40.567001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:46.607 [2024-07-12 20:36:40.567015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:46.607 [2024-07-12 20:36:40.567027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:46.607 [2024-07-12 20:36:40.567042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:46.607 [2024-07-12 20:36:40.567054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:46.607 [2024-07-12 20:36:40.567069] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:46.607 [2024-07-12 20:36:40.567081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.607 [2024-07-12 20:36:40.567097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.607 [2024-07-12 20:36:40.567109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:46.607 [2024-07-12 20:36:40.567123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:46.607 [2024-07-12 20:36:40.567136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:46.607 [2024-07-12 20:36:40.567152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.607 [2024-07-12 20:36:40.567164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:46.607 [2024-07-12 20:36:40.567181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:19:46.607 [2024-07-12 20:36:40.567201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.607 [2024-07-12 20:36:40.567301] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:19:46.607 [2024-07-12 20:36:40.567323] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:49.173 [2024-07-12 20:36:42.839147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.839222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:49.173 [2024-07-12 20:36:42.839278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2271.835 ms 00:19:49.173 [2024-07-12 20:36:42.839293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.854244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.854310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.173 [2024-07-12 20:36:42.854351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.821 ms 00:19:49.173 [2024-07-12 20:36:42.854367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.854511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.854528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:49.173 [2024-07-12 20:36:42.854544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:49.173 [2024-07-12 20:36:42.854557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.867961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.868011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.173 [2024-07-12 20:36:42.868034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.319 ms 00:19:49.173 [2024-07-12 20:36:42.868059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.868116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.868131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.173 [2024-07-12 20:36:42.868147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:49.173 [2024-07-12 20:36:42.868170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.868810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.868836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.173 [2024-07-12 20:36:42.868854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:19:49.173 [2024-07-12 20:36:42.868875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.869039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.869055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.173 [2024-07-12 20:36:42.869072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:19:49.173 [2024-07-12 20:36:42.869084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.878437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.878496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.173 [2024-07-12 
20:36:42.878517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.322 ms 00:19:49.173 [2024-07-12 20:36:42.878529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.889257] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:49.173 [2024-07-12 20:36:42.893380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.893431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:49.173 [2024-07-12 20:36:42.893449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.731 ms 00:19:49.173 [2024-07-12 20:36:42.893474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.963396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.963491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:49.173 [2024-07-12 20:36:42.963514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.879 ms 00:19:49.173 [2024-07-12 20:36:42.963533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.963772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.963797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:49.173 [2024-07-12 20:36:42.963811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:19:49.173 [2024-07-12 20:36:42.963826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.967648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.967711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:49.173 [2024-07-12 20:36:42.967739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.793 ms 00:19:49.173 [2024-07-12 20:36:42.967754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.970884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.970933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:49.173 [2024-07-12 20:36:42.970951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.083 ms 00:19:49.173 [2024-07-12 20:36:42.970965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:42.971442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:42.971473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:49.173 [2024-07-12 20:36:42.971488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:19:49.173 [2024-07-12 20:36:42.971505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:43.006456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:43.006532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:49.173 [2024-07-12 20:36:43.006558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.921 ms 00:19:49.173 [2024-07-12 20:36:43.006574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:43.011614] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:43.011672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:49.173 [2024-07-12 20:36:43.011691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.988 ms 00:19:49.173 [2024-07-12 20:36:43.011706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:43.015212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:43.015271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:49.173 [2024-07-12 20:36:43.015289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.458 ms 00:19:49.173 [2024-07-12 20:36:43.015303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:43.019186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:43.019253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:49.173 [2024-07-12 20:36:43.019272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.838 ms 00:19:49.173 [2024-07-12 20:36:43.019291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:43.019354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:43.019380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:49.173 [2024-07-12 20:36:43.019394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:49.173 [2024-07-12 20:36:43.019408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:43.019501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.173 [2024-07-12 20:36:43.019521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:49.173 [2024-07-12 20:36:43.019535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:49.173 [2024-07-12 20:36:43.019562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.173 [2024-07-12 20:36:43.020833] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2470.617 ms, result 0 00:19:49.173 { 00:19:49.173 "name": "ftl0", 00:19:49.173 "uuid": "3bb011f9-c31c-4127-9322-4bfe0622a87b" 00:19:49.173 } 00:19:49.173 20:36:43 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:49.173 20:36:43 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:49.173 20:36:43 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:49.431 20:36:43 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:49.689 [2024-07-12 20:36:43.583118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.689 [2024-07-12 20:36:43.583189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:49.689 [2024-07-12 20:36:43.583217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:49.689 [2024-07-12 20:36:43.583231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.689 [2024-07-12 20:36:43.583292] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:49.689 
[2024-07-12 20:36:43.584140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.689 [2024-07-12 20:36:43.584172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:49.689 [2024-07-12 20:36:43.584186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:19:49.689 [2024-07-12 20:36:43.584201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.689 [2024-07-12 20:36:43.584496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.689 [2024-07-12 20:36:43.584525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:49.689 [2024-07-12 20:36:43.584543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:49.689 [2024-07-12 20:36:43.584568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.689 [2024-07-12 20:36:43.587773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.689 [2024-07-12 20:36:43.587808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:49.689 [2024-07-12 20:36:43.587839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.174 ms 00:19:49.689 [2024-07-12 20:36:43.587852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.689 [2024-07-12 20:36:43.594374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.689 [2024-07-12 20:36:43.594424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:49.689 [2024-07-12 20:36:43.594448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.496 ms 00:19:49.689 [2024-07-12 20:36:43.594468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.689 [2024-07-12 20:36:43.596028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.689 [2024-07-12 20:36:43.596081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:49.689 [2024-07-12 20:36:43.596098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 00:19:49.689 [2024-07-12 20:36:43.596112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.689 [2024-07-12 20:36:43.600518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.689 [2024-07-12 20:36:43.600568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:49.689 [2024-07-12 20:36:43.600586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.363 ms 00:19:49.689 [2024-07-12 20:36:43.600600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.689 [2024-07-12 20:36:43.600740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.689 [2024-07-12 20:36:43.600765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:49.689 [2024-07-12 20:36:43.600779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:49.689 [2024-07-12 20:36:43.600793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.689 [2024-07-12 20:36:43.602655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.689 [2024-07-12 20:36:43.602699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:49.689 [2024-07-12 20:36:43.602715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.838 ms 00:19:49.689 [2024-07-12 20:36:43.602729] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.689 [2024-07-12 20:36:43.604208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.690 [2024-07-12 20:36:43.604268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:49.690 [2024-07-12 20:36:43.604285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:19:49.690 [2024-07-12 20:36:43.604299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.690 [2024-07-12 20:36:43.605447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.690 [2024-07-12 20:36:43.605492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:49.690 [2024-07-12 20:36:43.605508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:19:49.690 [2024-07-12 20:36:43.605521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.690 [2024-07-12 20:36:43.606618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.690 [2024-07-12 20:36:43.606662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:49.690 [2024-07-12 20:36:43.606678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:19:49.690 [2024-07-12 20:36:43.606691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.690 [2024-07-12 20:36:43.606732] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:49.690 [2024-07-12 20:36:43.606759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606951] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.606990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 
[2024-07-12 20:36:43.607338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:19:49.690 [2024-07-12 20:36:43.607676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:49.690 [2024-07-12 20:36:43.607961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.607973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.607988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:49.691 [2024-07-12 20:36:43.608184] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:49.691 [2024-07-12 20:36:43.608197] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3bb011f9-c31c-4127-9322-4bfe0622a87b 00:19:49.691 [2024-07-12 20:36:43.608213] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:49.691 [2024-07-12 20:36:43.608224] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:49.691 [2024-07-12 20:36:43.608251] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:49.691 [2024-07-12 20:36:43.608265] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:49.691 [2024-07-12 20:36:43.608283] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:49.691 [2024-07-12 20:36:43.608295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:49.691 [2024-07-12 20:36:43.608309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:49.691 [2024-07-12 20:36:43.608320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:49.691 [2024-07-12 20:36:43.608332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:49.691 [2024-07-12 20:36:43.608344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.691 [2024-07-12 20:36:43.608358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:49.691 [2024-07-12 20:36:43.608370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:19:49.691 [2024-07-12 20:36:43.608383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.610492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.691 [2024-07-12 20:36:43.610528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:49.691 
[2024-07-12 20:36:43.610545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:19:49.691 [2024-07-12 20:36:43.610559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.610717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.691 [2024-07-12 20:36:43.610741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:49.691 [2024-07-12 20:36:43.610754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:49.691 [2024-07-12 20:36:43.610768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.618946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.691 [2024-07-12 20:36:43.619115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.691 [2024-07-12 20:36:43.619253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.691 [2024-07-12 20:36:43.619394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.619507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.691 [2024-07-12 20:36:43.619559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.691 [2024-07-12 20:36:43.619664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.691 [2024-07-12 20:36:43.619770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.619975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.691 [2024-07-12 20:36:43.620105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.691 [2024-07-12 20:36:43.620210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.691 [2024-07-12 20:36:43.620293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.620397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.691 [2024-07-12 20:36:43.620506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.691 [2024-07-12 20:36:43.620609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.691 [2024-07-12 20:36:43.620663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.633216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.691 [2024-07-12 20:36:43.633480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.691 [2024-07-12 20:36:43.633615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.691 [2024-07-12 20:36:43.633763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.643626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.691 [2024-07-12 20:36:43.643809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.691 [2024-07-12 20:36:43.643929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.691 [2024-07-12 20:36:43.644056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.644199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.691 [2024-07-12 20:36:43.644295] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.691 [2024-07-12 20:36:43.644400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.691 [2024-07-12 20:36:43.644453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.644615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.691 [2024-07-12 20:36:43.644672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.691 [2024-07-12 20:36:43.644713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.691 [2024-07-12 20:36:43.644827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.644978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.691 [2024-07-12 20:36:43.645035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.691 [2024-07-12 20:36:43.645134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.691 [2024-07-12 20:36:43.645281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.645389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.691 [2024-07-12 20:36:43.645466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:49.691 [2024-07-12 20:36:43.645563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.691 [2024-07-12 20:36:43.645615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.645751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.691 [2024-07-12 20:36:43.645805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.691 [2024-07-12 20:36:43.645853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.691 [2024-07-12 20:36:43.645894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.646027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.691 [2024-07-12 20:36:43.646080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.691 [2024-07-12 20:36:43.646098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.691 [2024-07-12 20:36:43.646112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.691 [2024-07-12 20:36:43.646293] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.132 ms, result 0 00:19:49.691 true 00:19:49.691 20:36:43 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 93018 00:19:49.691 20:36:43 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 93018 ']' 00:19:49.691 20:36:43 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 93018 00:19:49.691 20:36:43 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:19:49.691 20:36:43 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.691 20:36:43 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93018 00:19:49.691 killing process with pid 93018 00:19:49.691 20:36:43 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:49.691 20:36:43 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:49.691 
20:36:43 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93018' 00:19:49.691 20:36:43 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 93018 00:19:49.691 20:36:43 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 93018 00:19:52.982 20:36:46 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:58.281 262144+0 records in 00:19:58.281 262144+0 records out 00:19:58.281 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.79628 s, 224 MB/s 00:19:58.281 20:36:51 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:59.708 20:36:53 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:59.968 [2024-07-12 20:36:53.909143] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:19:59.968 [2024-07-12 20:36:53.909479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93238 ] 00:19:59.968 [2024-07-12 20:36:54.059132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:59.968 [2024-07-12 20:36:54.082745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.226 [2024-07-12 20:36:54.187216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.226 [2024-07-12 20:36:54.320869] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.226 [2024-07-12 20:36:54.320978] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.488 [2024-07-12 20:36:54.483097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.488 [2024-07-12 20:36:54.483171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:00.488 [2024-07-12 20:36:54.483204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:00.488 [2024-07-12 20:36:54.483218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.488 [2024-07-12 20:36:54.483326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.488 [2024-07-12 20:36:54.483357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.488 [2024-07-12 20:36:54.483384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:00.488 [2024-07-12 20:36:54.483396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.488 [2024-07-12 20:36:54.483429] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:00.488 [2024-07-12 20:36:54.483793] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:00.488 [2024-07-12 20:36:54.483841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.488 [2024-07-12 20:36:54.483859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.488 [2024-07-12 20:36:54.483879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:20:00.488 
[2024-07-12 20:36:54.483892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.488 [2024-07-12 20:36:54.485872] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:00.488 [2024-07-12 20:36:54.488837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.488 [2024-07-12 20:36:54.488884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:00.488 [2024-07-12 20:36:54.488918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.967 ms 00:20:00.488 [2024-07-12 20:36:54.488930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.488 [2024-07-12 20:36:54.489004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.488 [2024-07-12 20:36:54.489024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:00.488 [2024-07-12 20:36:54.489037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:00.488 [2024-07-12 20:36:54.489053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.488 [2024-07-12 20:36:54.497872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.488 [2024-07-12 20:36:54.497922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.488 [2024-07-12 20:36:54.497955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.748 ms 00:20:00.488 [2024-07-12 20:36:54.497967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.488 [2024-07-12 20:36:54.498074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.488 [2024-07-12 20:36:54.498094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.488 [2024-07-12 20:36:54.498113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:00.488 [2024-07-12 20:36:54.498125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.488 [2024-07-12 20:36:54.498210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.488 [2024-07-12 20:36:54.498235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:00.488 [2024-07-12 20:36:54.498249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:00.488 [2024-07-12 20:36:54.498303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.488 [2024-07-12 20:36:54.498363] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.488 [2024-07-12 20:36:54.500552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.488 [2024-07-12 20:36:54.500599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.488 [2024-07-12 20:36:54.500615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.220 ms 00:20:00.488 [2024-07-12 20:36:54.500628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.488 [2024-07-12 20:36:54.500684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.488 [2024-07-12 20:36:54.500701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:00.488 [2024-07-12 20:36:54.500714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:00.488 [2024-07-12 20:36:54.500725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.488 [2024-07-12 
20:36:54.500764] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:00.488 [2024-07-12 20:36:54.500795] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:00.488 [2024-07-12 20:36:54.500841] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:00.488 [2024-07-12 20:36:54.500867] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:00.488 [2024-07-12 20:36:54.500976] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:00.488 [2024-07-12 20:36:54.500992] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:00.488 [2024-07-12 20:36:54.501008] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:00.488 [2024-07-12 20:36:54.501023] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:00.488 [2024-07-12 20:36:54.501038] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:00.488 [2024-07-12 20:36:54.501050] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:00.488 [2024-07-12 20:36:54.501062] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:00.488 [2024-07-12 20:36:54.501074] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:00.488 [2024-07-12 20:36:54.501098] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:00.488 [2024-07-12 20:36:54.501111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.488 [2024-07-12 20:36:54.501127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:00.488 [2024-07-12 20:36:54.501140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:20:00.488 [2024-07-12 20:36:54.501151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.488 [2024-07-12 20:36:54.501303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.489 [2024-07-12 20:36:54.501323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:00.489 [2024-07-12 20:36:54.501336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:20:00.489 [2024-07-12 20:36:54.501347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.489 [2024-07-12 20:36:54.501458] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:00.489 [2024-07-12 20:36:54.501488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:00.489 [2024-07-12 20:36:54.501506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.489 [2024-07-12 20:36:54.501527] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.489 [2024-07-12 20:36:54.501539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:00.489 [2024-07-12 20:36:54.501551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:00.489 [2024-07-12 20:36:54.501563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:00.489 [2024-07-12 20:36:54.501575] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region band_md 00:20:00.489 [2024-07-12 20:36:54.501587] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:00.489 [2024-07-12 20:36:54.501598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.489 [2024-07-12 20:36:54.501610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:00.489 [2024-07-12 20:36:54.501621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:00.489 [2024-07-12 20:36:54.501638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.489 [2024-07-12 20:36:54.501650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:00.489 [2024-07-12 20:36:54.501662] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:00.489 [2024-07-12 20:36:54.501685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.489 [2024-07-12 20:36:54.501697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:00.489 [2024-07-12 20:36:54.501709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:00.489 [2024-07-12 20:36:54.501721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.489 [2024-07-12 20:36:54.501733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:00.489 [2024-07-12 20:36:54.501745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:00.489 [2024-07-12 20:36:54.501756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.489 [2024-07-12 20:36:54.501768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:00.489 [2024-07-12 20:36:54.501779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:00.489 [2024-07-12 20:36:54.501790] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.489 [2024-07-12 20:36:54.501801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:00.489 [2024-07-12 20:36:54.501813] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:00.489 [2024-07-12 20:36:54.501824] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.489 [2024-07-12 20:36:54.501842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:00.489 [2024-07-12 20:36:54.501855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:00.489 [2024-07-12 20:36:54.501866] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.489 [2024-07-12 20:36:54.501878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:00.489 [2024-07-12 20:36:54.501889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:00.489 [2024-07-12 20:36:54.501900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.489 [2024-07-12 20:36:54.501912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:00.489 [2024-07-12 20:36:54.501923] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:00.489 [2024-07-12 20:36:54.501934] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.489 [2024-07-12 20:36:54.501946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:00.489 [2024-07-12 20:36:54.501958] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:00.489 [2024-07-12 20:36:54.501969] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.489 [2024-07-12 20:36:54.501984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:00.489 [2024-07-12 20:36:54.501995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:00.489 [2024-07-12 20:36:54.502007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.489 [2024-07-12 20:36:54.502018] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:00.489 [2024-07-12 20:36:54.502034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:00.489 [2024-07-12 20:36:54.502054] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.489 [2024-07-12 20:36:54.502066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.489 [2024-07-12 20:36:54.502079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:00.489 [2024-07-12 20:36:54.502091] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:00.489 [2024-07-12 20:36:54.502104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:00.489 [2024-07-12 20:36:54.502116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:00.489 [2024-07-12 20:36:54.502127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:00.489 [2024-07-12 20:36:54.502139] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:00.489 [2024-07-12 20:36:54.502152] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:00.489 [2024-07-12 20:36:54.502167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.489 [2024-07-12 20:36:54.502180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:00.489 [2024-07-12 20:36:54.502192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:00.489 [2024-07-12 20:36:54.502205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:00.489 [2024-07-12 20:36:54.502217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:00.489 [2024-07-12 20:36:54.502230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:00.489 [2024-07-12 20:36:54.502246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:00.489 [2024-07-12 20:36:54.502642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:00.489 [2024-07-12 20:36:54.502818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:00.489 [2024-07-12 20:36:54.502957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:00.489 [2024-07-12 20:36:54.503136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:00.489 [2024-07-12 20:36:54.503309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:00.489 [2024-07-12 20:36:54.503453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:00.489 [2024-07-12 20:36:54.503534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:00.489 [2024-07-12 20:36:54.503709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:00.489 [2024-07-12 20:36:54.503873] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:00.489 [2024-07-12 20:36:54.503963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.489 [2024-07-12 20:36:54.504161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:00.489 [2024-07-12 20:36:54.504260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:00.489 [2024-07-12 20:36:54.504428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:00.489 [2024-07-12 20:36:54.504588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:00.489 [2024-07-12 20:36:54.504617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.489 [2024-07-12 20:36:54.504641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:00.489 [2024-07-12 20:36:54.504662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.223 ms 00:20:00.489 [2024-07-12 20:36:54.504679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.489 [2024-07-12 20:36:54.532847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.489 [2024-07-12 20:36:54.533205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.489 [2024-07-12 20:36:54.533450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.059 ms 00:20:00.489 [2024-07-12 20:36:54.533546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.489 [2024-07-12 20:36:54.533819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.489 [2024-07-12 20:36:54.533919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:00.489 [2024-07-12 20:36:54.534095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:00.489 [2024-07-12 20:36:54.534295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.489 [2024-07-12 20:36:54.547457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.489 [2024-07-12 20:36:54.547666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.489 [2024-07-12 20:36:54.547823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.854 ms 00:20:00.489 [2024-07-12 20:36:54.547888] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.489 [2024-07-12 20:36:54.548071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.489 [2024-07-12 20:36:54.548297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.489 [2024-07-12 20:36:54.548440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:00.489 [2024-07-12 20:36:54.548518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.489 [2024-07-12 20:36:54.549303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.489 [2024-07-12 20:36:54.549453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.489 [2024-07-12 20:36:54.549579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:20:00.489 [2024-07-12 20:36:54.549639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.489 [2024-07-12 20:36:54.549991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.489 [2024-07-12 20:36:54.550143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.489 [2024-07-12 20:36:54.550288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:20:00.489 [2024-07-12 20:36:54.550421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.489 [2024-07-12 20:36:54.558133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.489 [2024-07-12 20:36:54.558319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.490 [2024-07-12 20:36:54.558467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.624 ms 00:20:00.490 [2024-07-12 20:36:54.558529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.561778] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:00.490 [2024-07-12 20:36:54.561982] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:00.490 [2024-07-12 20:36:54.562033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.562050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:00.490 [2024-07-12 20:36:54.562068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:20:00.490 [2024-07-12 20:36:54.562080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.577861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.577926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:00.490 [2024-07-12 20:36:54.577945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.704 ms 00:20:00.490 [2024-07-12 20:36:54.577979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.580039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.580082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:00.490 [2024-07-12 20:36:54.580099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.997 ms 00:20:00.490 [2024-07-12 20:36:54.580110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 
20:36:54.581717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.581756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:00.490 [2024-07-12 20:36:54.581805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:20:00.490 [2024-07-12 20:36:54.581816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.582273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.582305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:00.490 [2024-07-12 20:36:54.582321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:20:00.490 [2024-07-12 20:36:54.582348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.603674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.603748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:00.490 [2024-07-12 20:36:54.603786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.299 ms 00:20:00.490 [2024-07-12 20:36:54.603816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.612233] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:00.490 [2024-07-12 20:36:54.616673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.616718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:00.490 [2024-07-12 20:36:54.616754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.774 ms 00:20:00.490 [2024-07-12 20:36:54.616766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.616923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.616961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:00.490 [2024-07-12 20:36:54.616975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:00.490 [2024-07-12 20:36:54.616991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.617084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.617108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:00.490 [2024-07-12 20:36:54.617121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:00.490 [2024-07-12 20:36:54.617132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.617164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.617178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:00.490 [2024-07-12 20:36:54.617191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:00.490 [2024-07-12 20:36:54.617201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.617243] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:00.490 [2024-07-12 20:36:54.617259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.617294] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:00.490 [2024-07-12 20:36:54.617318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:00.490 [2024-07-12 20:36:54.617329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.621668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.621713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:00.490 [2024-07-12 20:36:54.621747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.313 ms 00:20:00.490 [2024-07-12 20:36:54.621759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.621841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.490 [2024-07-12 20:36:54.621875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:00.490 [2024-07-12 20:36:54.621896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:00.490 [2024-07-12 20:36:54.621914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.490 [2024-07-12 20:36:54.623374] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 139.691 ms, result 0 00:20:37.886  Copying: 26/1024 [MB] (26 MBps) Copying: 53/1024 [MB] (27 MBps) Copying: 81/1024 [MB] (28 MBps) Copying: 109/1024 [MB] (27 MBps) Copying: 136/1024 [MB] (27 MBps) Copying: 163/1024 [MB] (27 MBps) Copying: 190/1024 [MB] (26 MBps) Copying: 217/1024 [MB] (26 MBps) Copying: 243/1024 [MB] (26 MBps) Copying: 272/1024 [MB] (28 MBps) Copying: 299/1024 [MB] (27 MBps) Copying: 326/1024 [MB] (27 MBps) Copying: 353/1024 [MB] (27 MBps) Copying: 380/1024 [MB] (26 MBps) Copying: 408/1024 [MB] (28 MBps) Copying: 439/1024 [MB] (30 MBps) Copying: 467/1024 [MB] (28 MBps) Copying: 493/1024 [MB] (26 MBps) Copying: 519/1024 [MB] (25 MBps) Copying: 549/1024 [MB] (29 MBps) Copying: 578/1024 [MB] (29 MBps) Copying: 607/1024 [MB] (28 MBps) Copying: 635/1024 [MB] (27 MBps) Copying: 660/1024 [MB] (25 MBps) Copying: 685/1024 [MB] (25 MBps) Copying: 712/1024 [MB] (27 MBps) Copying: 741/1024 [MB] (28 MBps) Copying: 769/1024 [MB] (27 MBps) Copying: 796/1024 [MB] (27 MBps) Copying: 824/1024 [MB] (27 MBps) Copying: 852/1024 [MB] (28 MBps) Copying: 880/1024 [MB] (28 MBps) Copying: 908/1024 [MB] (28 MBps) Copying: 936/1024 [MB] (27 MBps) Copying: 964/1024 [MB] (27 MBps) Copying: 991/1024 [MB] (26 MBps) Copying: 1018/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-12 20:37:31.832061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.886 [2024-07-12 20:37:31.832125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:37.886 [2024-07-12 20:37:31.832148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:37.886 [2024-07-12 20:37:31.832161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.887 [2024-07-12 20:37:31.832191] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:37.887 [2024-07-12 20:37:31.833049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.887 [2024-07-12 20:37:31.833078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:37.887 [2024-07-12 20:37:31.833093] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:20:37.887 [2024-07-12 20:37:31.833105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.887 [2024-07-12 20:37:31.834709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.887 [2024-07-12 20:37:31.834751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:37.887 [2024-07-12 20:37:31.834775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:20:37.887 [2024-07-12 20:37:31.834804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.887 [2024-07-12 20:37:31.850429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.887 [2024-07-12 20:37:31.850471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:37.887 [2024-07-12 20:37:31.850489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.602 ms 00:20:37.887 [2024-07-12 20:37:31.850500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.887 [2024-07-12 20:37:31.856984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.887 [2024-07-12 20:37:31.857019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:37.887 [2024-07-12 20:37:31.857035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.445 ms 00:20:37.887 [2024-07-12 20:37:31.857053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.887 [2024-07-12 20:37:31.858308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.887 [2024-07-12 20:37:31.858347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:37.887 [2024-07-12 20:37:31.858362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:20:37.887 [2024-07-12 20:37:31.858374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.887 [2024-07-12 20:37:31.862155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.887 [2024-07-12 20:37:31.862198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:37.887 [2024-07-12 20:37:31.862215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.744 ms 00:20:37.887 [2024-07-12 20:37:31.862227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.887 [2024-07-12 20:37:31.862367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.887 [2024-07-12 20:37:31.862387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:37.887 [2024-07-12 20:37:31.862406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:37.887 [2024-07-12 20:37:31.862418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.887 [2024-07-12 20:37:31.864359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.887 [2024-07-12 20:37:31.864397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:37.887 [2024-07-12 20:37:31.864413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.906 ms 00:20:37.887 [2024-07-12 20:37:31.864424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.887 [2024-07-12 20:37:31.866058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.887 [2024-07-12 20:37:31.866095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:37.887 
[2024-07-12 20:37:31.866110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.598 ms 00:20:37.887 [2024-07-12 20:37:31.866121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.887 [2024-07-12 20:37:31.867275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.887 [2024-07-12 20:37:31.867321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:37.887 [2024-07-12 20:37:31.867336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:20:37.887 [2024-07-12 20:37:31.867348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.887 [2024-07-12 20:37:31.868390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.887 [2024-07-12 20:37:31.868432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:37.887 [2024-07-12 20:37:31.868448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:20:37.887 [2024-07-12 20:37:31.868474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.887 [2024-07-12 20:37:31.868510] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:37.887 [2024-07-12 20:37:31.868533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868732] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.868995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 
[2024-07-12 20:37:31.869045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:37.887 [2024-07-12 20:37:31.869221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 
state: free 00:20:37.888 [2024-07-12 20:37:31.869383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 
0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-12 20:37:31.869832] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:37.888 [2024-07-12 20:37:31.869844] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3bb011f9-c31c-4127-9322-4bfe0622a87b 00:20:37.888 [2024-07-12 20:37:31.869857] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:37.888 [2024-07-12 20:37:31.869880] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:37.888 [2024-07-12 20:37:31.869892] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:37.888 [2024-07-12 20:37:31.869904] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:37.888 [2024-07-12 20:37:31.869915] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:37.888 [2024-07-12 20:37:31.869932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:37.888 [2024-07-12 20:37:31.869943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:37.888 [2024-07-12 20:37:31.869955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:37.888 [2024-07-12 20:37:31.869966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:37.888 [2024-07-12 20:37:31.869977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.888 [2024-07-12 20:37:31.869989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:37.888 [2024-07-12 20:37:31.870001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:20:37.888 [2024-07-12 20:37:31.870013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-12 20:37:31.872077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.888 [2024-07-12 20:37:31.872108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:37.888 [2024-07-12 20:37:31.872124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.042 ms 00:20:37.888 [2024-07-12 20:37:31.872142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-12 20:37:31.872296] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:37.888 [2024-07-12 20:37:31.872315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:37.888 [2024-07-12 20:37:31.872328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:20:37.888 [2024-07-12 20:37:31.872339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-12 20:37:31.879377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.888 [2024-07-12 20:37:31.879419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:37.888 [2024-07-12 20:37:31.879437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.888 [2024-07-12 20:37:31.879466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-12 20:37:31.879540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.888 [2024-07-12 20:37:31.879556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.888 [2024-07-12 20:37:31.879568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.888 [2024-07-12 20:37:31.879580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-12 20:37:31.879632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.888 [2024-07-12 20:37:31.879650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.888 [2024-07-12 20:37:31.879663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.888 [2024-07-12 20:37:31.879674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-12 20:37:31.879704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.888 [2024-07-12 20:37:31.879718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.888 [2024-07-12 20:37:31.879730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.888 [2024-07-12 20:37:31.879741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-12 20:37:31.892497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.888 [2024-07-12 20:37:31.892558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:37.888 [2024-07-12 20:37:31.892576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.888 [2024-07-12 20:37:31.892597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-12 20:37:31.902515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.888 [2024-07-12 20:37:31.902569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:37.888 [2024-07-12 20:37:31.902602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.888 [2024-07-12 20:37:31.902614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-12 20:37:31.902695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.888 [2024-07-12 20:37:31.902713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.888 [2024-07-12 20:37:31.902726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.888 [2024-07-12 20:37:31.902738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:37.888 [2024-07-12 20:37:31.902792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.888 [2024-07-12 20:37:31.902808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.888 [2024-07-12 20:37:31.902821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.888 [2024-07-12 20:37:31.902833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-12 20:37:31.902924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.888 [2024-07-12 20:37:31.902943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.888 [2024-07-12 20:37:31.902956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.888 [2024-07-12 20:37:31.902968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-12 20:37:31.903026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.888 [2024-07-12 20:37:31.903067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:37.888 [2024-07-12 20:37:31.903080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.888 [2024-07-12 20:37:31.903092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-12 20:37:31.903140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.889 [2024-07-12 20:37:31.903166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.889 [2024-07-12 20:37:31.903179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.889 [2024-07-12 20:37:31.903191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.889 [2024-07-12 20:37:31.903272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.889 [2024-07-12 20:37:31.903292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.889 [2024-07-12 20:37:31.903305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.889 [2024-07-12 20:37:31.903316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.889 [2024-07-12 20:37:31.903474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.365 ms, result 0 00:20:38.821 00:20:38.821 00:20:38.821 20:37:32 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:38.821 [2024-07-12 20:37:32.726994] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:20:38.821 [2024-07-12 20:37:32.727184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93629 ] 00:20:38.821 [2024-07-12 20:37:32.868535] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:38.821 [2024-07-12 20:37:32.891127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.821 [2024-07-12 20:37:32.965194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.079 [2024-07-12 20:37:33.086437] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:39.079 [2024-07-12 20:37:33.086527] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:39.338 [2024-07-12 20:37:33.245216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.338 [2024-07-12 20:37:33.245285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:39.338 [2024-07-12 20:37:33.245307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.338 [2024-07-12 20:37:33.245319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.338 [2024-07-12 20:37:33.245404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.338 [2024-07-12 20:37:33.245425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.338 [2024-07-12 20:37:33.245442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:39.338 [2024-07-12 20:37:33.245454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.338 [2024-07-12 20:37:33.245493] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:39.338 [2024-07-12 20:37:33.245774] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:39.338 [2024-07-12 20:37:33.245800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.338 [2024-07-12 20:37:33.245813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.338 [2024-07-12 20:37:33.245829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:20:39.338 [2024-07-12 20:37:33.245846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.338 [2024-07-12 20:37:33.247705] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:39.338 [2024-07-12 20:37:33.250533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.338 [2024-07-12 20:37:33.250586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:39.338 [2024-07-12 20:37:33.250603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.830 ms 00:20:39.338 [2024-07-12 20:37:33.250615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.338 [2024-07-12 20:37:33.250683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.338 [2024-07-12 20:37:33.250702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:39.338 [2024-07-12 20:37:33.250715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:39.338 [2024-07-12 20:37:33.250726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.338 [2024-07-12 20:37:33.259378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.338 [2024-07-12 20:37:33.259535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.338 [2024-07-12 20:37:33.259660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.583 ms 00:20:39.338 [2024-07-12 20:37:33.259802] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.338 [2024-07-12 20:37:33.259948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.338 [2024-07-12 20:37:33.260005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.338 [2024-07-12 20:37:33.260117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:39.338 [2024-07-12 20:37:33.260259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.338 [2024-07-12 20:37:33.260404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.338 [2024-07-12 20:37:33.260497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:39.338 [2024-07-12 20:37:33.260530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:39.338 [2024-07-12 20:37:33.260558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.338 [2024-07-12 20:37:33.260601] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:39.338 [2024-07-12 20:37:33.262671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.338 [2024-07-12 20:37:33.262718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.338 [2024-07-12 20:37:33.262734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:20:39.338 [2024-07-12 20:37:33.262757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.338 [2024-07-12 20:37:33.262809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.338 [2024-07-12 20:37:33.262826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:39.338 [2024-07-12 20:37:33.262838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:39.338 [2024-07-12 20:37:33.262860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.338 [2024-07-12 20:37:33.262899] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:39.338 [2024-07-12 20:37:33.262939] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:39.338 [2024-07-12 20:37:33.262991] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:39.338 [2024-07-12 20:37:33.263032] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:39.338 [2024-07-12 20:37:33.263139] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:39.338 [2024-07-12 20:37:33.263165] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:39.338 [2024-07-12 20:37:33.263180] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:39.338 [2024-07-12 20:37:33.263203] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:39.338 [2024-07-12 20:37:33.263217] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:39.338 [2024-07-12 20:37:33.263237] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:39.338 [2024-07-12 20:37:33.263447] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:20:39.338 [2024-07-12 20:37:33.263572] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:39.338 [2024-07-12 20:37:33.263620] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:39.338 [2024-07-12 20:37:33.263722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.338 [2024-07-12 20:37:33.263826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:39.338 [2024-07-12 20:37:33.263966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:20:39.338 [2024-07-12 20:37:33.264015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.338 [2024-07-12 20:37:33.264192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.338 [2024-07-12 20:37:33.264221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:39.338 [2024-07-12 20:37:33.264235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:39.338 [2024-07-12 20:37:33.264278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.338 [2024-07-12 20:37:33.264417] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:39.338 [2024-07-12 20:37:33.264448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:39.338 [2024-07-12 20:37:33.264466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.338 [2024-07-12 20:37:33.264477] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.338 [2024-07-12 20:37:33.264497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:39.339 [2024-07-12 20:37:33.264507] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:39.339 [2024-07-12 20:37:33.264518] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:39.339 [2024-07-12 20:37:33.264528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:39.339 [2024-07-12 20:37:33.264538] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:39.339 [2024-07-12 20:37:33.264548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.339 [2024-07-12 20:37:33.264558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:39.339 [2024-07-12 20:37:33.264568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:39.339 [2024-07-12 20:37:33.264578] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.339 [2024-07-12 20:37:33.264592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:39.339 [2024-07-12 20:37:33.264604] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:39.339 [2024-07-12 20:37:33.264625] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.339 [2024-07-12 20:37:33.264636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:39.339 [2024-07-12 20:37:33.264647] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:39.339 [2024-07-12 20:37:33.264667] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.339 [2024-07-12 20:37:33.264677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:39.339 [2024-07-12 20:37:33.264687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:39.339 [2024-07-12 20:37:33.264697] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.339 [2024-07-12 20:37:33.264707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:39.339 [2024-07-12 20:37:33.264718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:39.339 [2024-07-12 20:37:33.264728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.339 [2024-07-12 20:37:33.264738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:39.339 [2024-07-12 20:37:33.264748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:39.339 [2024-07-12 20:37:33.264758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.339 [2024-07-12 20:37:33.264768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:39.339 [2024-07-12 20:37:33.264785] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:39.339 [2024-07-12 20:37:33.264795] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.339 [2024-07-12 20:37:33.264806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:39.339 [2024-07-12 20:37:33.264816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:39.339 [2024-07-12 20:37:33.264827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.339 [2024-07-12 20:37:33.264838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:39.339 [2024-07-12 20:37:33.264848] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:39.339 [2024-07-12 20:37:33.264858] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.339 [2024-07-12 20:37:33.264868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:39.339 [2024-07-12 20:37:33.264878] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:39.339 [2024-07-12 20:37:33.264888] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.339 [2024-07-12 20:37:33.264901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:39.339 [2024-07-12 20:37:33.264911] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:39.339 [2024-07-12 20:37:33.264921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.339 [2024-07-12 20:37:33.264931] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:39.339 [2024-07-12 20:37:33.264945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:39.339 [2024-07-12 20:37:33.264960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.339 [2024-07-12 20:37:33.264972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.339 [2024-07-12 20:37:33.264984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:39.339 [2024-07-12 20:37:33.264995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:39.339 [2024-07-12 20:37:33.265006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:39.339 [2024-07-12 20:37:33.265017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:39.339 [2024-07-12 20:37:33.265027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:39.339 [2024-07-12 20:37:33.265037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:20:39.339 [2024-07-12 20:37:33.265050] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:39.339 [2024-07-12 20:37:33.265064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.339 [2024-07-12 20:37:33.265076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:39.339 [2024-07-12 20:37:33.265088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:39.339 [2024-07-12 20:37:33.265100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:39.339 [2024-07-12 20:37:33.265111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:39.339 [2024-07-12 20:37:33.265122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:39.339 [2024-07-12 20:37:33.265133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:39.339 [2024-07-12 20:37:33.265148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:39.339 [2024-07-12 20:37:33.265160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:39.339 [2024-07-12 20:37:33.265171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:39.339 [2024-07-12 20:37:33.265183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:39.339 [2024-07-12 20:37:33.265195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:39.339 [2024-07-12 20:37:33.265206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:39.339 [2024-07-12 20:37:33.265217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:39.339 [2024-07-12 20:37:33.265229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:39.339 [2024-07-12 20:37:33.265254] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:39.339 [2024-07-12 20:37:33.265269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.339 [2024-07-12 20:37:33.265281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:39.339 [2024-07-12 20:37:33.265293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:39.339 [2024-07-12 20:37:33.265304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:39.339 [2024-07-12 20:37:33.265316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:39.339 [2024-07-12 20:37:33.265328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.339 [2024-07-12 20:37:33.265348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:39.339 [2024-07-12 20:37:33.265364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:20:39.339 [2024-07-12 20:37:33.265377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.339 [2024-07-12 20:37:33.288483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.339 [2024-07-12 20:37:33.288543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.339 [2024-07-12 20:37:33.288563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.040 ms 00:20:39.339 [2024-07-12 20:37:33.288576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.339 [2024-07-12 20:37:33.288700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.339 [2024-07-12 20:37:33.288717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:39.339 [2024-07-12 20:37:33.288734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:39.339 [2024-07-12 20:37:33.288746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.339 [2024-07-12 20:37:33.301182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.339 [2024-07-12 20:37:33.301235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.339 [2024-07-12 20:37:33.301270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.354 ms 00:20:39.339 [2024-07-12 20:37:33.301283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.339 [2024-07-12 20:37:33.301331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.339 [2024-07-12 20:37:33.301354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.339 [2024-07-12 20:37:33.301367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:39.339 [2024-07-12 20:37:33.301379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.339 [2024-07-12 20:37:33.301969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.339 [2024-07-12 20:37:33.302007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.339 [2024-07-12 20:37:33.302022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:20:39.339 [2024-07-12 20:37:33.302033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.339 [2024-07-12 20:37:33.302204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.339 [2024-07-12 20:37:33.302224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.339 [2024-07-12 20:37:33.302255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:20:39.339 [2024-07-12 20:37:33.302278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.339 [2024-07-12 20:37:33.309843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.339 [2024-07-12 
20:37:33.309889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.339 [2024-07-12 20:37:33.309906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.528 ms 00:20:39.339 [2024-07-12 20:37:33.309930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.339 [2024-07-12 20:37:33.312952] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:39.339 [2024-07-12 20:37:33.312998] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:39.339 [2024-07-12 20:37:33.313027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.339 [2024-07-12 20:37:33.313044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:39.339 [2024-07-12 20:37:33.313058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.986 ms 00:20:39.340 [2024-07-12 20:37:33.313069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.328977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.340 [2024-07-12 20:37:33.329020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:39.340 [2024-07-12 20:37:33.329052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.862 ms 00:20:39.340 [2024-07-12 20:37:33.329069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.330828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.340 [2024-07-12 20:37:33.330869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:39.340 [2024-07-12 20:37:33.330884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:20:39.340 [2024-07-12 20:37:33.330896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.332509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.340 [2024-07-12 20:37:33.332548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:39.340 [2024-07-12 20:37:33.332563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.571 ms 00:20:39.340 [2024-07-12 20:37:33.332574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.332943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.340 [2024-07-12 20:37:33.332976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:39.340 [2024-07-12 20:37:33.332995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:20:39.340 [2024-07-12 20:37:33.333005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.354644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.340 [2024-07-12 20:37:33.354721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:39.340 [2024-07-12 20:37:33.354743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.614 ms 00:20:39.340 [2024-07-12 20:37:33.354755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.362925] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:39.340 [2024-07-12 20:37:33.365587] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.340 [2024-07-12 20:37:33.365623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:39.340 [2024-07-12 20:37:33.365640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.766 ms 00:20:39.340 [2024-07-12 20:37:33.365651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.365743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.340 [2024-07-12 20:37:33.365766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:39.340 [2024-07-12 20:37:33.365783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:39.340 [2024-07-12 20:37:33.365794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.365894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.340 [2024-07-12 20:37:33.365923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:39.340 [2024-07-12 20:37:33.365945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:39.340 [2024-07-12 20:37:33.365956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.365990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.340 [2024-07-12 20:37:33.366017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:39.340 [2024-07-12 20:37:33.366029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:39.340 [2024-07-12 20:37:33.366040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.366083] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:39.340 [2024-07-12 20:37:33.366104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.340 [2024-07-12 20:37:33.366115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:39.340 [2024-07-12 20:37:33.366131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:39.340 [2024-07-12 20:37:33.366143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.370069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.340 [2024-07-12 20:37:33.370112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:39.340 [2024-07-12 20:37:33.370129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.902 ms 00:20:39.340 [2024-07-12 20:37:33.370141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.370231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.340 [2024-07-12 20:37:33.370277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:39.340 [2024-07-12 20:37:33.370298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:39.340 [2024-07-12 20:37:33.370315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.340 [2024-07-12 20:37:33.371639] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 125.912 ms, result 0 00:21:17.495  Copying: 28/1024 [MB] (28 MBps) Copying: 56/1024 [MB] (28 MBps) Copying: 86/1024 [MB] (29 MBps) Copying: 115/1024 [MB] 
(29 MBps) Copying: 144/1024 [MB] (29 MBps) Copying: 173/1024 [MB] (29 MBps) Copying: 199/1024 [MB] (25 MBps) Copying: 225/1024 [MB] (25 MBps) Copying: 251/1024 [MB] (26 MBps) Copying: 276/1024 [MB] (25 MBps) Copying: 303/1024 [MB] (26 MBps) Copying: 330/1024 [MB] (26 MBps) Copying: 356/1024 [MB] (26 MBps) Copying: 383/1024 [MB] (27 MBps) Copying: 410/1024 [MB] (26 MBps) Copying: 437/1024 [MB] (27 MBps) Copying: 465/1024 [MB] (27 MBps) Copying: 492/1024 [MB] (27 MBps) Copying: 518/1024 [MB] (26 MBps) Copying: 546/1024 [MB] (27 MBps) Copying: 573/1024 [MB] (27 MBps) Copying: 600/1024 [MB] (26 MBps) Copying: 627/1024 [MB] (27 MBps) Copying: 653/1024 [MB] (26 MBps) Copying: 680/1024 [MB] (26 MBps) Copying: 709/1024 [MB] (29 MBps) Copying: 737/1024 [MB] (27 MBps) Copying: 766/1024 [MB] (28 MBps) Copying: 788/1024 [MB] (22 MBps) Copying: 813/1024 [MB] (25 MBps) Copying: 838/1024 [MB] (24 MBps) Copying: 865/1024 [MB] (26 MBps) Copying: 890/1024 [MB] (25 MBps) Copying: 917/1024 [MB] (26 MBps) Copying: 944/1024 [MB] (27 MBps) Copying: 971/1024 [MB] (27 MBps) Copying: 999/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-12 20:38:11.639610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.495 [2024-07-12 20:38:11.639715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:17.495 [2024-07-12 20:38:11.639764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:17.495 [2024-07-12 20:38:11.639797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.495 [2024-07-12 20:38:11.639859] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:17.495 [2024-07-12 20:38:11.640862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.495 [2024-07-12 20:38:11.640907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:17.495 [2024-07-12 20:38:11.640947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:21:17.495 [2024-07-12 20:38:11.640967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.495 [2024-07-12 20:38:11.641379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.495 [2024-07-12 20:38:11.641415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:17.495 [2024-07-12 20:38:11.641435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:21:17.495 [2024-07-12 20:38:11.641454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.755 [2024-07-12 20:38:11.649065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.755 [2024-07-12 20:38:11.649132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:17.755 [2024-07-12 20:38:11.649178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.562 ms 00:21:17.755 [2024-07-12 20:38:11.649198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.755 [2024-07-12 20:38:11.658432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.755 [2024-07-12 20:38:11.658479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:17.755 [2024-07-12 20:38:11.658518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.170 ms 00:21:17.755 [2024-07-12 20:38:11.658532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.755 [2024-07-12 
20:38:11.660597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.755 [2024-07-12 20:38:11.660648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:17.755 [2024-07-12 20:38:11.660669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:21:17.756 [2024-07-12 20:38:11.660682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.756 [2024-07-12 20:38:11.664310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.756 [2024-07-12 20:38:11.664372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:17.756 [2024-07-12 20:38:11.664402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:21:17.756 [2024-07-12 20:38:11.664417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.756 [2024-07-12 20:38:11.664592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.756 [2024-07-12 20:38:11.664620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:17.756 [2024-07-12 20:38:11.664636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:21:17.756 [2024-07-12 20:38:11.664663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.756 [2024-07-12 20:38:11.666948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.756 [2024-07-12 20:38:11.666998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:17.756 [2024-07-12 20:38:11.667017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.258 ms 00:21:17.756 [2024-07-12 20:38:11.667031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.756 [2024-07-12 20:38:11.668677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.756 [2024-07-12 20:38:11.668735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:17.756 [2024-07-12 20:38:11.668777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.587 ms 00:21:17.756 [2024-07-12 20:38:11.668791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.756 [2024-07-12 20:38:11.670189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.756 [2024-07-12 20:38:11.670233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:17.756 [2024-07-12 20:38:11.670271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:21:17.756 [2024-07-12 20:38:11.670285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.756 [2024-07-12 20:38:11.671431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.756 [2024-07-12 20:38:11.671483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:17.756 [2024-07-12 20:38:11.671541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:21:17.756 [2024-07-12 20:38:11.671564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.756 [2024-07-12 20:38:11.671622] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:17.756 [2024-07-12 20:38:11.671652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:21:17.756 [2024-07-12 20:38:11.671696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.671999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:21:17.756 [2024-07-12 20:38:11.672120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:17.756 [2024-07-12 20:38:11.672839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.672854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.672868] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.672882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.672897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.672911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.672925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.672939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.672954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.672968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.672982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:17.757 [2024-07-12 20:38:11.673233] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:17.757 [2024-07-12 20:38:11.673260] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 3bb011f9-c31c-4127-9322-4bfe0622a87b 00:21:17.757 [2024-07-12 20:38:11.673275] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:17.757 [2024-07-12 20:38:11.673289] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:17.757 [2024-07-12 20:38:11.673302] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:17.757 [2024-07-12 20:38:11.673317] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:17.757 [2024-07-12 20:38:11.673338] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:17.757 [2024-07-12 20:38:11.673354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:17.757 [2024-07-12 20:38:11.673368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:17.757 [2024-07-12 20:38:11.673380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:17.757 [2024-07-12 20:38:11.673393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:17.757 [2024-07-12 20:38:11.673407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.757 [2024-07-12 20:38:11.673422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:17.757 [2024-07-12 20:38:11.673437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.786 ms 00:21:17.757 [2024-07-12 20:38:11.673464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.675767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.757 [2024-07-12 20:38:11.675801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:17.757 [2024-07-12 20:38:11.675837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.272 ms 00:21:17.757 [2024-07-12 20:38:11.675852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.676001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.757 [2024-07-12 20:38:11.676037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:17.757 [2024-07-12 20:38:11.676070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:21:17.757 [2024-07-12 20:38:11.676085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.684125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.757 [2024-07-12 20:38:11.684332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:17.757 [2024-07-12 20:38:11.684533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.757 [2024-07-12 20:38:11.684662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.684858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.757 [2024-07-12 20:38:11.684985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:17.757 [2024-07-12 20:38:11.685112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.757 [2024-07-12 20:38:11.685184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.685392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.757 [2024-07-12 20:38:11.685468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:21:17.757 [2024-07-12 20:38:11.685583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.757 [2024-07-12 20:38:11.685717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.685888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.757 [2024-07-12 20:38:11.686014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:17.757 [2024-07-12 20:38:11.686134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.757 [2024-07-12 20:38:11.686270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.701548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.757 [2024-07-12 20:38:11.701835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:17.757 [2024-07-12 20:38:11.701991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.757 [2024-07-12 20:38:11.702128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.713388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.757 [2024-07-12 20:38:11.713669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:17.757 [2024-07-12 20:38:11.713817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.757 [2024-07-12 20:38:11.713876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.714072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.757 [2024-07-12 20:38:11.714203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:17.757 [2024-07-12 20:38:11.714358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.757 [2024-07-12 20:38:11.714386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.714462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.757 [2024-07-12 20:38:11.714482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:17.757 [2024-07-12 20:38:11.714497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.757 [2024-07-12 20:38:11.714510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.714621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.757 [2024-07-12 20:38:11.714643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:17.757 [2024-07-12 20:38:11.714658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.757 [2024-07-12 20:38:11.714673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.714732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.757 [2024-07-12 20:38:11.714753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:17.757 [2024-07-12 20:38:11.714769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.757 [2024-07-12 20:38:11.714782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.714867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.757 [2024-07-12 20:38:11.714888] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:17.757 [2024-07-12 20:38:11.714903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.757 [2024-07-12 20:38:11.714917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.714983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.757 [2024-07-12 20:38:11.715003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:17.757 [2024-07-12 20:38:11.715029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.757 [2024-07-12 20:38:11.715071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.757 [2024-07-12 20:38:11.715426] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 75.627 ms, result 0 00:21:18.016 00:21:18.016 00:21:18.016 20:38:12 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:20.564 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:20.564 20:38:14 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:20.564 [2024-07-12 20:38:14.389584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:20.564 [2024-07-12 20:38:14.389759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94050 ] 00:21:20.564 [2024-07-12 20:38:14.533916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:20.564 [2024-07-12 20:38:14.554622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.564 [2024-07-12 20:38:14.655415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.823 [2024-07-12 20:38:14.789491] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:20.823 [2024-07-12 20:38:14.789607] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:20.823 [2024-07-12 20:38:14.956224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.823 [2024-07-12 20:38:14.956326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:20.823 [2024-07-12 20:38:14.956353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:20.823 [2024-07-12 20:38:14.956371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.823 [2024-07-12 20:38:14.956475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.823 [2024-07-12 20:38:14.956501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:20.823 [2024-07-12 20:38:14.956522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:20.823 [2024-07-12 20:38:14.956538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.823 [2024-07-12 20:38:14.956597] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:20.823 [2024-07-12 20:38:14.957006] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:20.823 [2024-07-12 20:38:14.957037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.823 [2024-07-12 20:38:14.957053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:20.823 [2024-07-12 20:38:14.957073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:21:20.823 [2024-07-12 20:38:14.957100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.823 [2024-07-12 20:38:14.959182] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:20.823 [2024-07-12 20:38:14.962351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.823 [2024-07-12 20:38:14.962410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:20.823 [2024-07-12 20:38:14.962439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.173 ms 00:21:20.823 [2024-07-12 20:38:14.962469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.823 [2024-07-12 20:38:14.962565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.823 [2024-07-12 20:38:14.962588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:20.823 [2024-07-12 20:38:14.962605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:20.823 [2024-07-12 20:38:14.962625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.083 [2024-07-12 20:38:14.971725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.083 [2024-07-12 20:38:14.971798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:21.083 [2024-07-12 20:38:14.971821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.000 ms 00:21:21.083 [2024-07-12 20:38:14.971838] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.083 [2024-07-12 20:38:14.971978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.083 [2024-07-12 20:38:14.972003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:21.084 [2024-07-12 20:38:14.972026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:21.084 [2024-07-12 20:38:14.972054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.084 [2024-07-12 20:38:14.972177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.084 [2024-07-12 20:38:14.972208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:21.084 [2024-07-12 20:38:14.972268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:21.084 [2024-07-12 20:38:14.972290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.084 [2024-07-12 20:38:14.972374] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:21.084 [2024-07-12 20:38:14.974649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.084 [2024-07-12 20:38:14.974707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:21.084 [2024-07-12 20:38:14.974728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.296 ms 00:21:21.084 [2024-07-12 20:38:14.974757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.084 [2024-07-12 20:38:14.974832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.084 [2024-07-12 20:38:14.974852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:21.084 [2024-07-12 20:38:14.974882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:21.084 [2024-07-12 20:38:14.974897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.084 [2024-07-12 20:38:14.974955] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:21.084 [2024-07-12 20:38:14.974993] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:21.084 [2024-07-12 20:38:14.975068] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:21.084 [2024-07-12 20:38:14.975105] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:21.084 [2024-07-12 20:38:14.975261] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:21.084 [2024-07-12 20:38:14.975286] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:21.084 [2024-07-12 20:38:14.975306] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:21.084 [2024-07-12 20:38:14.975325] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:21.084 [2024-07-12 20:38:14.975342] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:21.084 [2024-07-12 20:38:14.975359] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:21.084 [2024-07-12 20:38:14.975373] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:21:21.084 [2024-07-12 20:38:14.975388] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:21.084 [2024-07-12 20:38:14.975402] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:21.084 [2024-07-12 20:38:14.975417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.084 [2024-07-12 20:38:14.975439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:21.084 [2024-07-12 20:38:14.975455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:21:21.084 [2024-07-12 20:38:14.975469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.084 [2024-07-12 20:38:14.975596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.084 [2024-07-12 20:38:14.975617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:21.084 [2024-07-12 20:38:14.975634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:21.084 [2024-07-12 20:38:14.975661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.084 [2024-07-12 20:38:14.975799] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:21.084 [2024-07-12 20:38:14.975822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:21.084 [2024-07-12 20:38:14.975846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:21.084 [2024-07-12 20:38:14.975861] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.084 [2024-07-12 20:38:14.975877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:21.084 [2024-07-12 20:38:14.975891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:21.084 [2024-07-12 20:38:14.975906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:21.084 [2024-07-12 20:38:14.975921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:21.084 [2024-07-12 20:38:14.975935] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:21.084 [2024-07-12 20:38:14.975950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:21.084 [2024-07-12 20:38:14.975965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:21.084 [2024-07-12 20:38:14.975979] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:21.084 [2024-07-12 20:38:14.975999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:21.084 [2024-07-12 20:38:14.976015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:21.084 [2024-07-12 20:38:14.976030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:21.084 [2024-07-12 20:38:14.976058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.084 [2024-07-12 20:38:14.976072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:21.084 [2024-07-12 20:38:14.976086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:21.084 [2024-07-12 20:38:14.976100] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.084 [2024-07-12 20:38:14.976115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:21.084 [2024-07-12 20:38:14.976130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:21.084 [2024-07-12 20:38:14.976144] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:21.084 [2024-07-12 20:38:14.976158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:21.084 [2024-07-12 20:38:14.976172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:21.084 [2024-07-12 20:38:14.976186] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:21.084 [2024-07-12 20:38:14.976200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:21.084 [2024-07-12 20:38:14.976214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:21.084 [2024-07-12 20:38:14.976227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:21.084 [2024-07-12 20:38:14.976595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:21.084 [2024-07-12 20:38:14.976670] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:21.084 [2024-07-12 20:38:14.976725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:21.084 [2024-07-12 20:38:14.976889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:21.084 [2024-07-12 20:38:14.976952] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:21.084 [2024-07-12 20:38:14.977016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:21.084 [2024-07-12 20:38:14.977097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:21.084 [2024-07-12 20:38:14.977162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:21.084 [2024-07-12 20:38:14.977210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:21.084 [2024-07-12 20:38:14.977295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:21.084 [2024-07-12 20:38:14.977351] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:21.084 [2024-07-12 20:38:14.977459] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.084 [2024-07-12 20:38:14.977600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:21.084 [2024-07-12 20:38:14.977665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:21.084 [2024-07-12 20:38:14.977854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.084 [2024-07-12 20:38:14.977915] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:21.084 [2024-07-12 20:38:14.977974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:21.085 [2024-07-12 20:38:14.978086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:21.085 [2024-07-12 20:38:14.978159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.085 [2024-07-12 20:38:14.978208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:21.085 [2024-07-12 20:38:14.978281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:21.085 [2024-07-12 20:38:14.978334] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:21.085 [2024-07-12 20:38:14.978383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:21.085 [2024-07-12 20:38:14.978430] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:21.085 [2024-07-12 20:38:14.978482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:21:21.085 [2024-07-12 20:38:14.978533] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:21.085 [2024-07-12 20:38:14.978662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:21.085 [2024-07-12 20:38:14.978687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:21.085 [2024-07-12 20:38:14.978703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:21.085 [2024-07-12 20:38:14.978718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:21.085 [2024-07-12 20:38:14.978733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:21.085 [2024-07-12 20:38:14.978748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:21.085 [2024-07-12 20:38:14.978770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:21.085 [2024-07-12 20:38:14.978787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:21.085 [2024-07-12 20:38:14.978802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:21.085 [2024-07-12 20:38:14.978817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:21.085 [2024-07-12 20:38:14.978832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:21.085 [2024-07-12 20:38:14.978847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:21.085 [2024-07-12 20:38:14.978862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:21.085 [2024-07-12 20:38:14.978877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:21.085 [2024-07-12 20:38:14.978892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:21.085 [2024-07-12 20:38:14.978908] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:21.085 [2024-07-12 20:38:14.978925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:21.085 [2024-07-12 20:38:14.978940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:21.085 [2024-07-12 20:38:14.978956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:21.085 [2024-07-12 20:38:14.978971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:21.085 [2024-07-12 20:38:14.978986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:21.085 [2024-07-12 20:38:14.979004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 20:38:14.979028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:21.085 [2024-07-12 20:38:14.979045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:21:21.085 [2024-07-12 20:38:14.979101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.085 [2024-07-12 20:38:15.004734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 20:38:15.004810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:21.085 [2024-07-12 20:38:15.004837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.503 ms 00:21:21.085 [2024-07-12 20:38:15.004861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.085 [2024-07-12 20:38:15.005015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 20:38:15.005050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:21.085 [2024-07-12 20:38:15.005073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:21.085 [2024-07-12 20:38:15.005088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.085 [2024-07-12 20:38:15.019293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 20:38:15.019364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:21.085 [2024-07-12 20:38:15.019390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.089 ms 00:21:21.085 [2024-07-12 20:38:15.019408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.085 [2024-07-12 20:38:15.019500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 20:38:15.019530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:21.085 [2024-07-12 20:38:15.019559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:21.085 [2024-07-12 20:38:15.019574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.085 [2024-07-12 20:38:15.020239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 20:38:15.020295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:21.085 [2024-07-12 20:38:15.020325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:21:21.085 [2024-07-12 20:38:15.020341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.085 [2024-07-12 20:38:15.020562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 20:38:15.020587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:21.085 [2024-07-12 20:38:15.020609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:21:21.085 [2024-07-12 20:38:15.020634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.085 [2024-07-12 20:38:15.028892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 
20:38:15.028962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:21.085 [2024-07-12 20:38:15.028985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.220 ms 00:21:21.085 [2024-07-12 20:38:15.029001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.085 [2024-07-12 20:38:15.032447] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:21.085 [2024-07-12 20:38:15.032502] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:21.085 [2024-07-12 20:38:15.032532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 20:38:15.032550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:21.085 [2024-07-12 20:38:15.032566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.368 ms 00:21:21.085 [2024-07-12 20:38:15.032580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.085 [2024-07-12 20:38:15.052539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 20:38:15.052640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:21.085 [2024-07-12 20:38:15.052689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.894 ms 00:21:21.085 [2024-07-12 20:38:15.052713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.085 [2024-07-12 20:38:15.055699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 20:38:15.055757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:21.085 [2024-07-12 20:38:15.055778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.890 ms 00:21:21.085 [2024-07-12 20:38:15.055794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.085 [2024-07-12 20:38:15.057673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 20:38:15.057719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:21.085 [2024-07-12 20:38:15.057739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.826 ms 00:21:21.085 [2024-07-12 20:38:15.057754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.085 [2024-07-12 20:38:15.058350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.085 [2024-07-12 20:38:15.058390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:21.086 [2024-07-12 20:38:15.058414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:21:21.086 [2024-07-12 20:38:15.058429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.086 [2024-07-12 20:38:15.082760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.086 [2024-07-12 20:38:15.082847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:21.086 [2024-07-12 20:38:15.082874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.284 ms 00:21:21.086 [2024-07-12 20:38:15.082892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.086 [2024-07-12 20:38:15.094034] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:21.086 [2024-07-12 20:38:15.098894] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.086 [2024-07-12 20:38:15.098941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:21.086 [2024-07-12 20:38:15.098987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.915 ms 00:21:21.086 [2024-07-12 20:38:15.099003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.086 [2024-07-12 20:38:15.099159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.086 [2024-07-12 20:38:15.099188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:21.086 [2024-07-12 20:38:15.099270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:21.086 [2024-07-12 20:38:15.099291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.086 [2024-07-12 20:38:15.099417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.086 [2024-07-12 20:38:15.099440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:21.086 [2024-07-12 20:38:15.099457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:21.086 [2024-07-12 20:38:15.099472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.086 [2024-07-12 20:38:15.099536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.086 [2024-07-12 20:38:15.099559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:21.086 [2024-07-12 20:38:15.099575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:21.086 [2024-07-12 20:38:15.099589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.086 [2024-07-12 20:38:15.099643] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:21.086 [2024-07-12 20:38:15.099669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.086 [2024-07-12 20:38:15.099684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:21.086 [2024-07-12 20:38:15.099703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:21.086 [2024-07-12 20:38:15.099719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.086 [2024-07-12 20:38:15.104394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.086 [2024-07-12 20:38:15.104446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:21.086 [2024-07-12 20:38:15.104469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.639 ms 00:21:21.086 [2024-07-12 20:38:15.104501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.086 [2024-07-12 20:38:15.104599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.086 [2024-07-12 20:38:15.104629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:21.086 [2024-07-12 20:38:15.104646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:21.086 [2024-07-12 20:38:15.104661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.086 [2024-07-12 20:38:15.106285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 149.345 ms, result 0 00:22:01.199  Copying: 26/1024 [MB] (26 MBps) Copying: 53/1024 [MB] (26 MBps) Copying: 81/1024 [MB] (27 MBps) Copying: 109/1024 [MB] 
(28 MBps) Copying: 137/1024 [MB] (27 MBps) Copying: 163/1024 [MB] (26 MBps) Copying: 188/1024 [MB] (24 MBps) Copying: 214/1024 [MB] (26 MBps) Copying: 240/1024 [MB] (25 MBps) Copying: 266/1024 [MB] (26 MBps) Copying: 293/1024 [MB] (26 MBps) Copying: 320/1024 [MB] (26 MBps) Copying: 346/1024 [MB] (25 MBps) Copying: 372/1024 [MB] (26 MBps) Copying: 398/1024 [MB] (26 MBps) Copying: 426/1024 [MB] (27 MBps) Copying: 453/1024 [MB] (27 MBps) Copying: 480/1024 [MB] (26 MBps) Copying: 506/1024 [MB] (26 MBps) Copying: 533/1024 [MB] (26 MBps) Copying: 559/1024 [MB] (26 MBps) Copying: 586/1024 [MB] (26 MBps) Copying: 613/1024 [MB] (27 MBps) Copying: 640/1024 [MB] (26 MBps) Copying: 667/1024 [MB] (27 MBps) Copying: 693/1024 [MB] (25 MBps) Copying: 719/1024 [MB] (26 MBps) Copying: 746/1024 [MB] (26 MBps) Copying: 772/1024 [MB] (25 MBps) Copying: 797/1024 [MB] (25 MBps) Copying: 823/1024 [MB] (26 MBps) Copying: 849/1024 [MB] (25 MBps) Copying: 875/1024 [MB] (25 MBps) Copying: 900/1024 [MB] (25 MBps) Copying: 926/1024 [MB] (26 MBps) Copying: 953/1024 [MB] (26 MBps) Copying: 979/1024 [MB] (25 MBps) Copying: 1006/1024 [MB] (26 MBps) Copying: 1023/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-12 20:38:54.992689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.199 [2024-07-12 20:38:54.992763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:01.199 [2024-07-12 20:38:54.992786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:01.199 [2024-07-12 20:38:54.992800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.199 [2024-07-12 20:38:54.995128] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:01.199 [2024-07-12 20:38:54.999018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.199 [2024-07-12 20:38:54.999058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:01.199 [2024-07-12 20:38:54.999100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.843 ms 00:22:01.199 [2024-07-12 20:38:54.999131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.199 [2024-07-12 20:38:55.009785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.199 [2024-07-12 20:38:55.009837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:01.199 [2024-07-12 20:38:55.009872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.417 ms 00:22:01.199 [2024-07-12 20:38:55.009886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.199 [2024-07-12 20:38:55.031961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.199 [2024-07-12 20:38:55.032007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:01.199 [2024-07-12 20:38:55.032039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.049 ms 00:22:01.199 [2024-07-12 20:38:55.032064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.199 [2024-07-12 20:38:55.038516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.199 [2024-07-12 20:38:55.038550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:01.199 [2024-07-12 20:38:55.038581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.348 ms 00:22:01.199 [2024-07-12 20:38:55.038593] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:01.199 [2024-07-12 20:38:55.040378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.199 [2024-07-12 20:38:55.040431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:01.199 [2024-07-12 20:38:55.040463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:22:01.199 [2024-07-12 20:38:55.040475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.199 [2024-07-12 20:38:55.043893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.199 [2024-07-12 20:38:55.043969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:01.199 [2024-07-12 20:38:55.044001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.379 ms 00:22:01.199 [2024-07-12 20:38:55.044014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.199 [2024-07-12 20:38:55.146961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.199 [2024-07-12 20:38:55.147034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:01.199 [2024-07-12 20:38:55.147071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.902 ms 00:22:01.199 [2024-07-12 20:38:55.147112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.199 [2024-07-12 20:38:55.149646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.199 [2024-07-12 20:38:55.149688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:01.199 [2024-07-12 20:38:55.149719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.508 ms 00:22:01.199 [2024-07-12 20:38:55.149745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.199 [2024-07-12 20:38:55.151200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.199 [2024-07-12 20:38:55.151255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:01.199 [2024-07-12 20:38:55.151272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:22:01.199 [2024-07-12 20:38:55.151285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.199 [2024-07-12 20:38:55.152491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.199 [2024-07-12 20:38:55.152559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:01.199 [2024-07-12 20:38:55.152573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:22:01.199 [2024-07-12 20:38:55.152585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.199 [2024-07-12 20:38:55.153650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.199 [2024-07-12 20:38:55.153704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:01.199 [2024-07-12 20:38:55.153761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:22:01.199 [2024-07-12 20:38:55.153773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.199 [2024-07-12 20:38:55.153870] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:01.199 [2024-07-12 20:38:55.153895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 122112 / 261120 wr_cnt: 1 state: open 00:22:01.199 [2024-07-12 20:38:55.153910] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.153923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.153936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.153949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.153961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.153973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.153986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.153999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154216] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:01.199 [2024-07-12 20:38:55.154498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 
20:38:55.154658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.154975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:22:01.200 [2024-07-12 20:38:55.154988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:01.200 [2024-07-12 20:38:55.155338] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:01.200 
[2024-07-12 20:38:55.155356] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3bb011f9-c31c-4127-9322-4bfe0622a87b 00:22:01.200 [2024-07-12 20:38:55.155369] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 122112 00:22:01.200 [2024-07-12 20:38:55.155382] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 123072 00:22:01.200 [2024-07-12 20:38:55.155393] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 122112 00:22:01.200 [2024-07-12 20:38:55.155407] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:22:01.200 [2024-07-12 20:38:55.155418] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:01.200 [2024-07-12 20:38:55.155431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:01.200 [2024-07-12 20:38:55.155443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:01.200 [2024-07-12 20:38:55.155454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:01.200 [2024-07-12 20:38:55.155465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:01.200 [2024-07-12 20:38:55.155478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.200 [2024-07-12 20:38:55.155491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:01.200 [2024-07-12 20:38:55.155504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.610 ms 00:22:01.200 [2024-07-12 20:38:55.155516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.200 [2024-07-12 20:38:55.157666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.200 [2024-07-12 20:38:55.157700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:01.200 [2024-07-12 20:38:55.157717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.119 ms 00:22:01.200 [2024-07-12 20:38:55.157729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.200 [2024-07-12 20:38:55.157878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.200 [2024-07-12 20:38:55.157909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:01.200 [2024-07-12 20:38:55.157933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:22:01.200 [2024-07-12 20:38:55.157945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.200 [2024-07-12 20:38:55.165205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.200 [2024-07-12 20:38:55.165272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:01.200 [2024-07-12 20:38:55.165301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.200 [2024-07-12 20:38:55.165316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.200 [2024-07-12 20:38:55.165454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.200 [2024-07-12 20:38:55.165486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:01.200 [2024-07-12 20:38:55.165500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.200 [2024-07-12 20:38:55.165511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.200 [2024-07-12 20:38:55.165597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.200 
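
The WAF figure printed by ftl_dev_dump_stats above follows directly from the two counters in the same dump; the short sketch below (plain Python, not part of the test output) reproduces it, assuming the conventional definition of write amplification as total device writes divided by user writes.

    # Reproduce the WAF value reported by ftl_dev_dump_stats for ftl0 at shutdown.
    # The counters are copied verbatim from the log above; treating WAF as
    # total_writes / user_writes is an assumption based on the usual definition.
    total_writes = 123072   # "total writes" from the stats dump
    user_writes = 122112    # "user writes" from the stats dump
    waf = total_writes / user_writes
    print(f"WAF: {waf:.4f}")  # prints "WAF: 1.0079", matching the log line
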
[2024-07-12 20:38:55.165627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:01.200 [2024-07-12 20:38:55.165641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.200 [2024-07-12 20:38:55.165654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.200 [2024-07-12 20:38:55.165678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.200 [2024-07-12 20:38:55.165693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:01.200 [2024-07-12 20:38:55.165705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.200 [2024-07-12 20:38:55.165724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.200 [2024-07-12 20:38:55.178272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.200 [2024-07-12 20:38:55.178340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:01.200 [2024-07-12 20:38:55.178374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.200 [2024-07-12 20:38:55.178388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.200 [2024-07-12 20:38:55.188532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.201 [2024-07-12 20:38:55.188583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:01.201 [2024-07-12 20:38:55.188625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.201 [2024-07-12 20:38:55.188647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.201 [2024-07-12 20:38:55.188710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.201 [2024-07-12 20:38:55.188728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:01.201 [2024-07-12 20:38:55.188740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.201 [2024-07-12 20:38:55.188767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.201 [2024-07-12 20:38:55.188828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.201 [2024-07-12 20:38:55.188844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:01.201 [2024-07-12 20:38:55.188856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.201 [2024-07-12 20:38:55.188868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.201 [2024-07-12 20:38:55.188966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.201 [2024-07-12 20:38:55.188994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:01.201 [2024-07-12 20:38:55.189009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.201 [2024-07-12 20:38:55.189021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.201 [2024-07-12 20:38:55.189069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.201 [2024-07-12 20:38:55.189087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:01.201 [2024-07-12 20:38:55.189100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.201 [2024-07-12 20:38:55.189111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.201 [2024-07-12 20:38:55.189168] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.201 [2024-07-12 20:38:55.189190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:01.201 [2024-07-12 20:38:55.189204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.201 [2024-07-12 20:38:55.189216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.201 [2024-07-12 20:38:55.189300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.201 [2024-07-12 20:38:55.189319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:01.201 [2024-07-12 20:38:55.189332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.201 [2024-07-12 20:38:55.189345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.201 [2024-07-12 20:38:55.189520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 199.679 ms, result 0 00:22:01.768 00:22:01.768 00:22:01.768 20:38:55 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:02.026 [2024-07-12 20:38:56.011619] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:02.026 [2024-07-12 20:38:56.011832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94469 ] 00:22:02.026 [2024-07-12 20:38:56.163607] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
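
The spdk_dd invocation above reads from the ftl0 bdev into the test file with --skip=131072 and --count=262144. Assuming dd-like semantics where those arguments are counted in input blocks, and assuming a 4 KiB logical block size for the FTL bdev (both assumptions, though they are consistent with the 1024 [MB] total shown in the "Copying:" progress further down), they correspond to a 512 MiB offset and a 1 GiB transfer. A minimal sketch of that arithmetic (plain Python, not part of the test output):

    # Relate the spdk_dd block arguments seen in the log to byte sizes.
    # block_size = 4096 is an assumption (typical FTL logical block size);
    # it is consistent with the 1024 [MB] copy total reported later in the log.
    block_size = 4096
    skip_blocks = 131072    # --skip from the spdk_dd command line above
    count_blocks = 262144   # --count from the spdk_dd command line above
    skip_mib = skip_blocks * block_size / (1024 * 1024)
    count_mib = count_blocks * block_size / (1024 * 1024)
    print(f"skip = {skip_mib:.0f} MiB, count = {count_mib:.0f} MiB")  # 512 MiB, 1024 MiB
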
00:22:02.285 [2024-07-12 20:38:56.184039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.285 [2024-07-12 20:38:56.272560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.285 [2024-07-12 20:38:56.397723] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.285 [2024-07-12 20:38:56.397812] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.557 [2024-07-12 20:38:56.557699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.557 [2024-07-12 20:38:56.557815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.557 [2024-07-12 20:38:56.557835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:02.557 [2024-07-12 20:38:56.557857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.557 [2024-07-12 20:38:56.557933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.557 [2024-07-12 20:38:56.557953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.557 [2024-07-12 20:38:56.557970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:02.557 [2024-07-12 20:38:56.557981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.557 [2024-07-12 20:38:56.558010] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.557 [2024-07-12 20:38:56.558307] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.557 [2024-07-12 20:38:56.558335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.557 [2024-07-12 20:38:56.558348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.557 [2024-07-12 20:38:56.558360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:22:02.557 [2024-07-12 20:38:56.558372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.557 [2024-07-12 20:38:56.560400] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:02.557 [2024-07-12 20:38:56.563447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.557 [2024-07-12 20:38:56.563506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:02.557 [2024-07-12 20:38:56.563523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.049 ms 00:22:02.557 [2024-07-12 20:38:56.563546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.557 [2024-07-12 20:38:56.563631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.557 [2024-07-12 20:38:56.563662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:02.557 [2024-07-12 20:38:56.563691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:02.557 [2024-07-12 20:38:56.563702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.557 [2024-07-12 20:38:56.572561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.557 [2024-07-12 20:38:56.572611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.557 [2024-07-12 20:38:56.572626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.792 ms 00:22:02.557 [2024-07-12 20:38:56.572649] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.557 [2024-07-12 20:38:56.572740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.557 [2024-07-12 20:38:56.572759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.557 [2024-07-12 20:38:56.572775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:02.557 [2024-07-12 20:38:56.572789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.557 [2024-07-12 20:38:56.572854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.557 [2024-07-12 20:38:56.572889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.557 [2024-07-12 20:38:56.572901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:02.557 [2024-07-12 20:38:56.572911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.557 [2024-07-12 20:38:56.572942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.557 [2024-07-12 20:38:56.574964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.557 [2024-07-12 20:38:56.575005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.557 [2024-07-12 20:38:56.575019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms 00:22:02.557 [2024-07-12 20:38:56.575030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.557 [2024-07-12 20:38:56.575073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.557 [2024-07-12 20:38:56.575098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.557 [2024-07-12 20:38:56.575129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:02.557 [2024-07-12 20:38:56.575140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.557 [2024-07-12 20:38:56.575199] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:02.557 [2024-07-12 20:38:56.575231] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:02.557 [2024-07-12 20:38:56.575300] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:02.557 [2024-07-12 20:38:56.575348] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:02.557 [2024-07-12 20:38:56.575473] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:02.557 [2024-07-12 20:38:56.575489] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.557 [2024-07-12 20:38:56.575504] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:02.557 [2024-07-12 20:38:56.575519] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.557 [2024-07-12 20:38:56.575543] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.557 [2024-07-12 20:38:56.575556] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:02.557 [2024-07-12 20:38:56.575581] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:22:02.557 [2024-07-12 20:38:56.575600] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:02.557 [2024-07-12 20:38:56.575611] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:02.557 [2024-07-12 20:38:56.575623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.557 [2024-07-12 20:38:56.575640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.557 [2024-07-12 20:38:56.575655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:22:02.557 [2024-07-12 20:38:56.575680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.557 [2024-07-12 20:38:56.575766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.557 [2024-07-12 20:38:56.575780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.557 [2024-07-12 20:38:56.575791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:02.557 [2024-07-12 20:38:56.575802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.557 [2024-07-12 20:38:56.575895] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.557 [2024-07-12 20:38:56.575910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.557 [2024-07-12 20:38:56.575935] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.557 [2024-07-12 20:38:56.575946] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.557 [2024-07-12 20:38:56.575957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.557 [2024-07-12 20:38:56.575967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.557 [2024-07-12 20:38:56.575977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:02.557 [2024-07-12 20:38:56.575988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.557 [2024-07-12 20:38:56.575998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:02.557 [2024-07-12 20:38:56.576008] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.557 [2024-07-12 20:38:56.576022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.557 [2024-07-12 20:38:56.576032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:02.558 [2024-07-12 20:38:56.576042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.558 [2024-07-12 20:38:56.576052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.558 [2024-07-12 20:38:56.576062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:02.558 [2024-07-12 20:38:56.576083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.558 [2024-07-12 20:38:56.576094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.558 [2024-07-12 20:38:56.576105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:02.558 [2024-07-12 20:38:56.576115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.558 [2024-07-12 20:38:56.576125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.558 [2024-07-12 20:38:56.576135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:02.558 [2024-07-12 20:38:56.576145] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.558 [2024-07-12 20:38:56.576156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.558 [2024-07-12 20:38:56.576166] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:02.558 [2024-07-12 20:38:56.576176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.558 [2024-07-12 20:38:56.576185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.558 [2024-07-12 20:38:56.576199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:02.558 [2024-07-12 20:38:56.576210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.558 [2024-07-12 20:38:56.576220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.558 [2024-07-12 20:38:56.576229] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:02.558 [2024-07-12 20:38:56.576239] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.558 [2024-07-12 20:38:56.576249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.558 [2024-07-12 20:38:56.576259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:02.558 [2024-07-12 20:38:56.576269] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.558 [2024-07-12 20:38:56.576293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.558 [2024-07-12 20:38:56.576307] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:02.558 [2024-07-12 20:38:56.576317] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.558 [2024-07-12 20:38:56.576327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:02.558 [2024-07-12 20:38:56.576337] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:02.558 [2024-07-12 20:38:56.576346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.558 [2024-07-12 20:38:56.576356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:02.558 [2024-07-12 20:38:56.576367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:02.558 [2024-07-12 20:38:56.576382] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.558 [2024-07-12 20:38:56.576392] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.558 [2024-07-12 20:38:56.576403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.558 [2024-07-12 20:38:56.576413] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.558 [2024-07-12 20:38:56.576424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.558 [2024-07-12 20:38:56.576435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.558 [2024-07-12 20:38:56.576446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.558 [2024-07-12 20:38:56.576456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.558 [2024-07-12 20:38:56.576466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.558 [2024-07-12 20:38:56.576476] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.558 [2024-07-12 20:38:56.576487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:22:02.558 [2024-07-12 20:38:56.576499] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.558 [2024-07-12 20:38:56.576512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.558 [2024-07-12 20:38:56.576524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:02.558 [2024-07-12 20:38:56.576535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:02.558 [2024-07-12 20:38:56.576546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:02.558 [2024-07-12 20:38:56.576560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:02.558 [2024-07-12 20:38:56.576572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:02.558 [2024-07-12 20:38:56.576583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:02.558 [2024-07-12 20:38:56.576593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:02.558 [2024-07-12 20:38:56.576605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:02.558 [2024-07-12 20:38:56.576616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:02.558 [2024-07-12 20:38:56.576626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:02.558 [2024-07-12 20:38:56.576637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:02.558 [2024-07-12 20:38:56.576647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:02.558 [2024-07-12 20:38:56.576658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:02.558 [2024-07-12 20:38:56.576669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:02.558 [2024-07-12 20:38:56.576680] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.558 [2024-07-12 20:38:56.576692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.558 [2024-07-12 20:38:56.576704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.558 [2024-07-12 20:38:56.576716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.558 [2024-07-12 20:38:56.576727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.558 [2024-07-12 20:38:56.576741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.558 [2024-07-12 20:38:56.576753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.558 [2024-07-12 20:38:56.576767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.558 [2024-07-12 20:38:56.576788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:22:02.558 [2024-07-12 20:38:56.576799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.558 [2024-07-12 20:38:56.600666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.558 [2024-07-12 20:38:56.600747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.558 [2024-07-12 20:38:56.600774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.782 ms 00:22:02.558 [2024-07-12 20:38:56.600806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.558 [2024-07-12 20:38:56.600960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.558 [2024-07-12 20:38:56.600982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:02.558 [2024-07-12 20:38:56.601008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:22:02.558 [2024-07-12 20:38:56.601023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.558 [2024-07-12 20:38:56.614148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.558 [2024-07-12 20:38:56.614219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.558 [2024-07-12 20:38:56.614236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.016 ms 00:22:02.558 [2024-07-12 20:38:56.614248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.558 [2024-07-12 20:38:56.614317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.558 [2024-07-12 20:38:56.614340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.558 [2024-07-12 20:38:56.614352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:02.558 [2024-07-12 20:38:56.614363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.558 [2024-07-12 20:38:56.615031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.558 [2024-07-12 20:38:56.615058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.558 [2024-07-12 20:38:56.615073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:22:02.558 [2024-07-12 20:38:56.615084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.558 [2024-07-12 20:38:56.615312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.558 [2024-07-12 20:38:56.615334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.558 [2024-07-12 20:38:56.615353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:22:02.558 [2024-07-12 20:38:56.615364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.558 [2024-07-12 20:38:56.622906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.558 [2024-07-12 
20:38:56.622964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.558 [2024-07-12 20:38:56.622980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.499 ms 00:22:02.558 [2024-07-12 20:38:56.623001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.558 [2024-07-12 20:38:56.626088] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:02.558 [2024-07-12 20:38:56.626141] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:02.558 [2024-07-12 20:38:56.626158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.558 [2024-07-12 20:38:56.626170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:02.558 [2024-07-12 20:38:56.626182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.027 ms 00:22:02.558 [2024-07-12 20:38:56.626192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.558 [2024-07-12 20:38:56.640552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.558 [2024-07-12 20:38:56.640604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:02.558 [2024-07-12 20:38:56.640621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.309 ms 00:22:02.558 [2024-07-12 20:38:56.640645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.558 [2024-07-12 20:38:56.642583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.558 [2024-07-12 20:38:56.642631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:02.558 [2024-07-12 20:38:56.642646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.891 ms 00:22:02.559 [2024-07-12 20:38:56.642656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.559 [2024-07-12 20:38:56.644410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.559 [2024-07-12 20:38:56.644458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:02.559 [2024-07-12 20:38:56.644473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:22:02.559 [2024-07-12 20:38:56.644484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.559 [2024-07-12 20:38:56.644915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.559 [2024-07-12 20:38:56.644951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:02.559 [2024-07-12 20:38:56.644971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:22:02.559 [2024-07-12 20:38:56.644982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.559 [2024-07-12 20:38:56.666376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.559 [2024-07-12 20:38:56.666458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:02.559 [2024-07-12 20:38:56.666477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.368 ms 00:22:02.559 [2024-07-12 20:38:56.666489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.559 [2024-07-12 20:38:56.675469] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:02.559 [2024-07-12 20:38:56.680463] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.559 [2024-07-12 20:38:56.680521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:02.559 [2024-07-12 20:38:56.680540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.887 ms 00:22:02.559 [2024-07-12 20:38:56.680566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.559 [2024-07-12 20:38:56.680719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.559 [2024-07-12 20:38:56.680746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:02.559 [2024-07-12 20:38:56.680765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:02.559 [2024-07-12 20:38:56.680777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.559 [2024-07-12 20:38:56.682805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.559 [2024-07-12 20:38:56.682857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:02.559 [2024-07-12 20:38:56.682872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.955 ms 00:22:02.559 [2024-07-12 20:38:56.682883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.559 [2024-07-12 20:38:56.682917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.559 [2024-07-12 20:38:56.682931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:02.559 [2024-07-12 20:38:56.682955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:02.559 [2024-07-12 20:38:56.682973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.559 [2024-07-12 20:38:56.683013] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:02.559 [2024-07-12 20:38:56.683040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.559 [2024-07-12 20:38:56.683051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:02.559 [2024-07-12 20:38:56.683070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:02.559 [2024-07-12 20:38:56.683081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.559 [2024-07-12 20:38:56.687564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.559 [2024-07-12 20:38:56.687617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:02.559 [2024-07-12 20:38:56.687650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.424 ms 00:22:02.559 [2024-07-12 20:38:56.687705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.559 [2024-07-12 20:38:56.687797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.559 [2024-07-12 20:38:56.687821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:02.559 [2024-07-12 20:38:56.687834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:02.559 [2024-07-12 20:38:56.687845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.559 [2024-07-12 20:38:56.695076] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.840 ms, result 0 00:22:42.213  Copying: 23/1024 [MB] (23 MBps) Copying: 49/1024 [MB] (26 MBps) Copying: 76/1024 [MB] (27 MBps) Copying: 103/1024 [MB] 
(27 MBps) Copying: 129/1024 [MB] (26 MBps) Copying: 155/1024 [MB] (25 MBps) Copying: 182/1024 [MB] (26 MBps) Copying: 207/1024 [MB] (25 MBps) Copying: 234/1024 [MB] (26 MBps) Copying: 260/1024 [MB] (26 MBps) Copying: 287/1024 [MB] (27 MBps) Copying: 315/1024 [MB] (27 MBps) Copying: 341/1024 [MB] (25 MBps) Copying: 368/1024 [MB] (26 MBps) Copying: 392/1024 [MB] (24 MBps) Copying: 417/1024 [MB] (24 MBps) Copying: 444/1024 [MB] (27 MBps) Copying: 471/1024 [MB] (26 MBps) Copying: 498/1024 [MB] (27 MBps) Copying: 525/1024 [MB] (27 MBps) Copying: 552/1024 [MB] (27 MBps) Copying: 579/1024 [MB] (26 MBps) Copying: 606/1024 [MB] (27 MBps) Copying: 633/1024 [MB] (26 MBps) Copying: 659/1024 [MB] (26 MBps) Copying: 686/1024 [MB] (26 MBps) Copying: 713/1024 [MB] (27 MBps) Copying: 740/1024 [MB] (26 MBps) Copying: 766/1024 [MB] (26 MBps) Copying: 793/1024 [MB] (26 MBps) Copying: 819/1024 [MB] (25 MBps) Copying: 844/1024 [MB] (25 MBps) Copying: 870/1024 [MB] (25 MBps) Copying: 895/1024 [MB] (24 MBps) Copying: 919/1024 [MB] (24 MBps) Copying: 944/1024 [MB] (25 MBps) Copying: 971/1024 [MB] (26 MBps) Copying: 997/1024 [MB] (26 MBps) Copying: 1023/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-12 20:39:36.338207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.213 [2024-07-12 20:39:36.338328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:42.213 [2024-07-12 20:39:36.338356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:42.213 [2024-07-12 20:39:36.338373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.213 [2024-07-12 20:39:36.338413] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:42.213 [2024-07-12 20:39:36.339985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.213 [2024-07-12 20:39:36.340035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:42.213 [2024-07-12 20:39:36.340055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:22:42.213 [2024-07-12 20:39:36.340082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.213 [2024-07-12 20:39:36.340408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.213 [2024-07-12 20:39:36.340441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:42.213 [2024-07-12 20:39:36.340473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:22:42.213 [2024-07-12 20:39:36.340488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.213 [2024-07-12 20:39:36.346261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.213 [2024-07-12 20:39:36.346314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:42.213 [2024-07-12 20:39:36.346343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.723 ms 00:22:42.213 [2024-07-12 20:39:36.346359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.213 [2024-07-12 20:39:36.356349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.213 [2024-07-12 20:39:36.356401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:42.213 [2024-07-12 20:39:36.356422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.935 ms 00:22:42.213 [2024-07-12 20:39:36.356437] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:42.213 [2024-07-12 20:39:36.358179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.213 [2024-07-12 20:39:36.358230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:42.213 [2024-07-12 20:39:36.358265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:22:42.213 [2024-07-12 20:39:36.358281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.213 [2024-07-12 20:39:36.361531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.213 [2024-07-12 20:39:36.361592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:42.213 [2024-07-12 20:39:36.361612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.202 ms 00:22:42.213 [2024-07-12 20:39:36.361628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.473 [2024-07-12 20:39:36.468671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.473 [2024-07-12 20:39:36.468757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:42.473 [2024-07-12 20:39:36.468784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.986 ms 00:22:42.473 [2024-07-12 20:39:36.468810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.473 [2024-07-12 20:39:36.471252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.473 [2024-07-12 20:39:36.471299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:42.473 [2024-07-12 20:39:36.471319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.401 ms 00:22:42.473 [2024-07-12 20:39:36.471333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.473 [2024-07-12 20:39:36.472908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.473 [2024-07-12 20:39:36.472954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:42.473 [2024-07-12 20:39:36.472972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.529 ms 00:22:42.473 [2024-07-12 20:39:36.472987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.473 [2024-07-12 20:39:36.474184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.473 [2024-07-12 20:39:36.474231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:42.473 [2024-07-12 20:39:36.474265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:22:42.473 [2024-07-12 20:39:36.474280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.473 [2024-07-12 20:39:36.475462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.473 [2024-07-12 20:39:36.475507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:42.473 [2024-07-12 20:39:36.475537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms 00:22:42.473 [2024-07-12 20:39:36.475569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.473 [2024-07-12 20:39:36.475616] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:42.473 [2024-07-12 20:39:36.475644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:22:42.473 [2024-07-12 20:39:36.475678] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.475991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476066] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 
20:39:36.476483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:42.473 [2024-07-12 20:39:36.476797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.476813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.476828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.476844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.476859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.476875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.476890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:22:42.474 [2024-07-12 20:39:36.476905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.476921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.476936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.476952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.476967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.476983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.476999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:42.474 [2024-07-12 20:39:36.477305] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:42.474 
[2024-07-12 20:39:36.477328] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3bb011f9-c31c-4127-9322-4bfe0622a87b
00:22:42.474 [2024-07-12 20:39:36.477345] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888
00:22:42.474 [2024-07-12 20:39:36.477359] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 12736
00:22:42.474 [2024-07-12 20:39:36.477373] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 11776
00:22:42.474 [2024-07-12 20:39:36.477388] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0815
00:22:42.474 [2024-07-12 20:39:36.477417] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:42.474 [2024-07-12 20:39:36.477442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:42.474 [2024-07-12 20:39:36.477456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:42.474 [2024-07-12 20:39:36.477470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:42.474 [2024-07-12 20:39:36.477483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:42.474 [2024-07-12 20:39:36.477498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.474 [2024-07-12 20:39:36.477518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:42.474 [2024-07-12 20:39:36.477544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.884 ms
00:22:42.474 [2024-07-12 20:39:36.477558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:42.474 [2024-07-12 20:39:36.479838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.474 [2024-07-12 20:39:36.479876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:42.474 [2024-07-12 20:39:36.479896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms
00:22:42.474 [2024-07-12 20:39:36.479911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:42.474 [2024-07-12 20:39:36.480092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.474 [2024-07-12 20:39:36.480125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:22:42.474 [2024-07-12 20:39:36.480150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms
00:22:42.474 [2024-07-12 20:39:36.480164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:42.474 [2024-07-12 20:39:36.487898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:42.474 [2024-07-12 20:39:36.487950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:42.474 [2024-07-12 20:39:36.487971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:42.474 [2024-07-12 20:39:36.488003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:42.474 [2024-07-12 20:39:36.488082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:42.474 [2024-07-12 20:39:36.488101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:42.474 [2024-07-12 20:39:36.488123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:42.474 [2024-07-12 20:39:36.488137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:42.474 [2024-07-12 20:39:36.488230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:42.474 [2024-07-12
20:39:36.488279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:42.474 [2024-07-12 20:39:36.488298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.474 [2024-07-12 20:39:36.488312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.474 [2024-07-12 20:39:36.488348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.474 [2024-07-12 20:39:36.488365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:42.474 [2024-07-12 20:39:36.488380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.474 [2024-07-12 20:39:36.488403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.474 [2024-07-12 20:39:36.502874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.474 [2024-07-12 20:39:36.502960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:42.474 [2024-07-12 20:39:36.502984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.474 [2024-07-12 20:39:36.502998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.474 [2024-07-12 20:39:36.514272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.474 [2024-07-12 20:39:36.514335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:42.474 [2024-07-12 20:39:36.514368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.474 [2024-07-12 20:39:36.514384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.474 [2024-07-12 20:39:36.514470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.474 [2024-07-12 20:39:36.514490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:42.474 [2024-07-12 20:39:36.514521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.474 [2024-07-12 20:39:36.514535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.474 [2024-07-12 20:39:36.514593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.474 [2024-07-12 20:39:36.514611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:42.474 [2024-07-12 20:39:36.514627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.474 [2024-07-12 20:39:36.514641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.474 [2024-07-12 20:39:36.514764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.474 [2024-07-12 20:39:36.514787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:42.474 [2024-07-12 20:39:36.514803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.474 [2024-07-12 20:39:36.514817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.474 [2024-07-12 20:39:36.514873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.474 [2024-07-12 20:39:36.514895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:42.474 [2024-07-12 20:39:36.514924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.474 [2024-07-12 20:39:36.514939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.474 [2024-07-12 20:39:36.515006] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.474 [2024-07-12 20:39:36.515028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:42.474 [2024-07-12 20:39:36.515044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.474 [2024-07-12 20:39:36.515058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.474 [2024-07-12 20:39:36.515120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.474 [2024-07-12 20:39:36.515154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:42.474 [2024-07-12 20:39:36.515170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.474 [2024-07-12 20:39:36.515185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.474 [2024-07-12 20:39:36.515380] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 177.135 ms, result 0 00:22:42.734 00:22:42.734 00:22:42.734 20:39:36 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:45.265 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:45.265 Process with pid 93018 is not found 00:22:45.265 Remove shared memory files 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 93018 00:22:45.265 20:39:39 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 93018 ']' 00:22:45.265 20:39:39 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 93018 00:22:45.265 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93018) - No such process 00:22:45.265 20:39:39 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 93018 is not found' 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:45.265 20:39:39 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:45.265 00:22:45.265 real 3m3.491s 00:22:45.265 user 2m49.618s 00:22:45.265 sys 0m15.970s 00:22:45.265 20:39:39 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:45.265 ************************************ 00:22:45.265 END TEST ftl_restore 00:22:45.265 ************************************ 00:22:45.265 20:39:39 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:45.265 20:39:39 ftl -- common/autotest_common.sh@1142 -- # return 0 00:22:45.265 20:39:39 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 
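
Note: the restore run above finishes with the trap-based cleanup idiom these FTL tests share: a failure handler is armed for the whole test, then disarmed and invoked explicitly on the success path (ftl/restore.sh@84/@85 above, dirty_shutdown.sh@42 below). A minimal sketch of that idiom, using only the steps visible in this log:

    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT   # armed at test start (same pattern as dirty_shutdown.sh@42 below)
    # ... test body: bring up ftl0, write the test file through it, verify the read-back with md5sum -c ...
    trap - SIGINT SIGTERM EXIT                        # restore.sh@84: disarm the handler once the check passed
    restore_kill                                      # restore.sh@85: rm testfile/testfile.md5/ftl.json, killprocess, remove_shm
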
00:22:45.265 20:39:39 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:45.265 20:39:39 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.265 20:39:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:45.265 ************************************ 00:22:45.265 START TEST ftl_dirty_shutdown 00:22:45.265 ************************************ 00:22:45.265 20:39:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:45.524 * Looking for test storage... 00:22:45.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=94962 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 94962 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 94962 ']' 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.524 20:39:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:45.524 [2024-07-12 20:39:39.594477] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:45.524 [2024-07-12 20:39:39.594650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94962 ] 00:22:45.783 [2024-07-12 20:39:39.745897] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:22:45.783 [2024-07-12 20:39:39.770270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.783 [2024-07-12 20:39:39.864995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.717 20:39:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.717 20:39:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:22:46.717 20:39:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:46.717 20:39:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:46.717 20:39:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:46.717 20:39:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:46.717 20:39:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:46.717 20:39:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:46.975 20:39:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:46.975 20:39:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:46.975 20:39:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:46.975 20:39:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:46.975 20:39:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:46.975 20:39:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:46.975 20:39:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:46.975 20:39:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:47.233 20:39:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:47.233 { 00:22:47.233 "name": "nvme0n1", 00:22:47.233 "aliases": [ 00:22:47.233 "51fe570a-b429-4871-8a86-a750f22e404c" 00:22:47.233 ], 00:22:47.233 "product_name": "NVMe disk", 00:22:47.233 "block_size": 4096, 00:22:47.233 "num_blocks": 1310720, 00:22:47.233 "uuid": "51fe570a-b429-4871-8a86-a750f22e404c", 00:22:47.233 "assigned_rate_limits": { 00:22:47.233 "rw_ios_per_sec": 0, 00:22:47.233 "rw_mbytes_per_sec": 0, 00:22:47.233 "r_mbytes_per_sec": 0, 00:22:47.233 "w_mbytes_per_sec": 0 00:22:47.233 }, 00:22:47.233 "claimed": true, 00:22:47.233 "claim_type": "read_many_write_one", 00:22:47.233 "zoned": false, 00:22:47.233 "supported_io_types": { 00:22:47.233 "read": true, 00:22:47.233 "write": true, 00:22:47.233 "unmap": true, 00:22:47.233 "flush": true, 00:22:47.233 "reset": true, 00:22:47.233 "nvme_admin": true, 00:22:47.233 "nvme_io": true, 00:22:47.233 "nvme_io_md": false, 00:22:47.233 "write_zeroes": true, 00:22:47.233 "zcopy": false, 00:22:47.233 "get_zone_info": false, 00:22:47.233 "zone_management": false, 00:22:47.233 "zone_append": false, 00:22:47.233 "compare": true, 00:22:47.233 "compare_and_write": false, 00:22:47.233 "abort": true, 00:22:47.233 "seek_hole": false, 00:22:47.233 "seek_data": false, 00:22:47.233 "copy": true, 00:22:47.233 "nvme_iov_md": false 00:22:47.233 }, 00:22:47.233 "driver_specific": { 00:22:47.233 "nvme": [ 00:22:47.233 { 00:22:47.233 "pci_address": "0000:00:11.0", 00:22:47.233 "trid": { 00:22:47.233 "trtype": "PCIe", 00:22:47.233 "traddr": "0000:00:11.0" 00:22:47.233 }, 00:22:47.233 "ctrlr_data": { 
00:22:47.233 "cntlid": 0, 00:22:47.233 "vendor_id": "0x1b36", 00:22:47.233 "model_number": "QEMU NVMe Ctrl", 00:22:47.233 "serial_number": "12341", 00:22:47.233 "firmware_revision": "8.0.0", 00:22:47.233 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:47.233 "oacs": { 00:22:47.233 "security": 0, 00:22:47.233 "format": 1, 00:22:47.233 "firmware": 0, 00:22:47.233 "ns_manage": 1 00:22:47.233 }, 00:22:47.233 "multi_ctrlr": false, 00:22:47.233 "ana_reporting": false 00:22:47.233 }, 00:22:47.233 "vs": { 00:22:47.233 "nvme_version": "1.4" 00:22:47.233 }, 00:22:47.233 "ns_data": { 00:22:47.233 "id": 1, 00:22:47.233 "can_share": false 00:22:47.233 } 00:22:47.233 } 00:22:47.233 ], 00:22:47.233 "mp_policy": "active_passive" 00:22:47.233 } 00:22:47.233 } 00:22:47.233 ]' 00:22:47.233 20:39:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:47.233 20:39:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:47.233 20:39:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:47.233 20:39:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:47.233 20:39:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:47.233 20:39:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:47.233 20:39:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:47.233 20:39:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:47.233 20:39:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:47.233 20:39:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:47.233 20:39:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:47.491 20:39:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=104156dd-fd6a-41cf-81ae-968e98661e91 00:22:47.491 20:39:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:47.491 20:39:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 104156dd-fd6a-41cf-81ae-968e98661e91 00:22:47.751 20:39:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:48.011 20:39:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=3bce1f5a-7b7e-4e56-85c2-b2592a6eb12f 00:22:48.011 20:39:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3bce1f5a-7b7e-4e56-85c2-b2592a6eb12f 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=aee4479f-1f68-48a2-a82f-6f03a2093482 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 aee4479f-1f68-48a2-a82f-6f03a2093482 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=aee4479f-1f68-48a2-a82f-6f03a2093482 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 
aee4479f-1f68-48a2-a82f-6f03a2093482 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=aee4479f-1f68-48a2-a82f-6f03a2093482 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:48.270 20:39:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aee4479f-1f68-48a2-a82f-6f03a2093482 00:22:48.528 20:39:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:48.528 { 00:22:48.528 "name": "aee4479f-1f68-48a2-a82f-6f03a2093482", 00:22:48.528 "aliases": [ 00:22:48.528 "lvs/nvme0n1p0" 00:22:48.528 ], 00:22:48.528 "product_name": "Logical Volume", 00:22:48.528 "block_size": 4096, 00:22:48.528 "num_blocks": 26476544, 00:22:48.528 "uuid": "aee4479f-1f68-48a2-a82f-6f03a2093482", 00:22:48.528 "assigned_rate_limits": { 00:22:48.528 "rw_ios_per_sec": 0, 00:22:48.528 "rw_mbytes_per_sec": 0, 00:22:48.528 "r_mbytes_per_sec": 0, 00:22:48.528 "w_mbytes_per_sec": 0 00:22:48.528 }, 00:22:48.528 "claimed": false, 00:22:48.528 "zoned": false, 00:22:48.528 "supported_io_types": { 00:22:48.528 "read": true, 00:22:48.528 "write": true, 00:22:48.528 "unmap": true, 00:22:48.528 "flush": false, 00:22:48.528 "reset": true, 00:22:48.528 "nvme_admin": false, 00:22:48.528 "nvme_io": false, 00:22:48.528 "nvme_io_md": false, 00:22:48.528 "write_zeroes": true, 00:22:48.528 "zcopy": false, 00:22:48.528 "get_zone_info": false, 00:22:48.528 "zone_management": false, 00:22:48.528 "zone_append": false, 00:22:48.528 "compare": false, 00:22:48.528 "compare_and_write": false, 00:22:48.528 "abort": false, 00:22:48.528 "seek_hole": true, 00:22:48.528 "seek_data": true, 00:22:48.528 "copy": false, 00:22:48.528 "nvme_iov_md": false 00:22:48.528 }, 00:22:48.528 "driver_specific": { 00:22:48.528 "lvol": { 00:22:48.528 "lvol_store_uuid": "3bce1f5a-7b7e-4e56-85c2-b2592a6eb12f", 00:22:48.528 "base_bdev": "nvme0n1", 00:22:48.528 "thin_provision": true, 00:22:48.528 "num_allocated_clusters": 0, 00:22:48.528 "snapshot": false, 00:22:48.528 "clone": false, 00:22:48.528 "esnap_clone": false 00:22:48.528 } 00:22:48.528 } 00:22:48.528 } 00:22:48.528 ]' 00:22:48.528 20:39:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:48.786 20:39:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:48.786 20:39:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:48.786 20:39:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:48.786 20:39:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:48.786 20:39:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:48.786 20:39:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:48.786 20:39:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:48.786 20:39:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:49.043 20:39:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:49.043 20:39:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:49.043 20:39:43 
ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size aee4479f-1f68-48a2-a82f-6f03a2093482 00:22:49.043 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=aee4479f-1f68-48a2-a82f-6f03a2093482 00:22:49.043 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:49.043 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:49.043 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:49.043 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aee4479f-1f68-48a2-a82f-6f03a2093482 00:22:49.300 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:49.300 { 00:22:49.300 "name": "aee4479f-1f68-48a2-a82f-6f03a2093482", 00:22:49.300 "aliases": [ 00:22:49.300 "lvs/nvme0n1p0" 00:22:49.300 ], 00:22:49.301 "product_name": "Logical Volume", 00:22:49.301 "block_size": 4096, 00:22:49.301 "num_blocks": 26476544, 00:22:49.301 "uuid": "aee4479f-1f68-48a2-a82f-6f03a2093482", 00:22:49.301 "assigned_rate_limits": { 00:22:49.301 "rw_ios_per_sec": 0, 00:22:49.301 "rw_mbytes_per_sec": 0, 00:22:49.301 "r_mbytes_per_sec": 0, 00:22:49.301 "w_mbytes_per_sec": 0 00:22:49.301 }, 00:22:49.301 "claimed": false, 00:22:49.301 "zoned": false, 00:22:49.301 "supported_io_types": { 00:22:49.301 "read": true, 00:22:49.301 "write": true, 00:22:49.301 "unmap": true, 00:22:49.301 "flush": false, 00:22:49.301 "reset": true, 00:22:49.301 "nvme_admin": false, 00:22:49.301 "nvme_io": false, 00:22:49.301 "nvme_io_md": false, 00:22:49.301 "write_zeroes": true, 00:22:49.301 "zcopy": false, 00:22:49.301 "get_zone_info": false, 00:22:49.301 "zone_management": false, 00:22:49.301 "zone_append": false, 00:22:49.301 "compare": false, 00:22:49.301 "compare_and_write": false, 00:22:49.301 "abort": false, 00:22:49.301 "seek_hole": true, 00:22:49.301 "seek_data": true, 00:22:49.301 "copy": false, 00:22:49.301 "nvme_iov_md": false 00:22:49.301 }, 00:22:49.301 "driver_specific": { 00:22:49.301 "lvol": { 00:22:49.301 "lvol_store_uuid": "3bce1f5a-7b7e-4e56-85c2-b2592a6eb12f", 00:22:49.301 "base_bdev": "nvme0n1", 00:22:49.301 "thin_provision": true, 00:22:49.301 "num_allocated_clusters": 0, 00:22:49.301 "snapshot": false, 00:22:49.301 "clone": false, 00:22:49.301 "esnap_clone": false 00:22:49.301 } 00:22:49.301 } 00:22:49.301 } 00:22:49.301 ]' 00:22:49.301 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:49.301 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:49.301 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:49.558 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:49.558 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:49.558 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:49.558 20:39:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:49.558 20:39:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:49.558 20:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:49.816 20:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size aee4479f-1f68-48a2-a82f-6f03a2093482 00:22:49.816 
20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=aee4479f-1f68-48a2-a82f-6f03a2093482 00:22:49.816 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:49.816 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:49.816 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:49.816 20:39:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aee4479f-1f68-48a2-a82f-6f03a2093482 00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:50.074 { 00:22:50.074 "name": "aee4479f-1f68-48a2-a82f-6f03a2093482", 00:22:50.074 "aliases": [ 00:22:50.074 "lvs/nvme0n1p0" 00:22:50.074 ], 00:22:50.074 "product_name": "Logical Volume", 00:22:50.074 "block_size": 4096, 00:22:50.074 "num_blocks": 26476544, 00:22:50.074 "uuid": "aee4479f-1f68-48a2-a82f-6f03a2093482", 00:22:50.074 "assigned_rate_limits": { 00:22:50.074 "rw_ios_per_sec": 0, 00:22:50.074 "rw_mbytes_per_sec": 0, 00:22:50.074 "r_mbytes_per_sec": 0, 00:22:50.074 "w_mbytes_per_sec": 0 00:22:50.074 }, 00:22:50.074 "claimed": false, 00:22:50.074 "zoned": false, 00:22:50.074 "supported_io_types": { 00:22:50.074 "read": true, 00:22:50.074 "write": true, 00:22:50.074 "unmap": true, 00:22:50.074 "flush": false, 00:22:50.074 "reset": true, 00:22:50.074 "nvme_admin": false, 00:22:50.074 "nvme_io": false, 00:22:50.074 "nvme_io_md": false, 00:22:50.074 "write_zeroes": true, 00:22:50.074 "zcopy": false, 00:22:50.074 "get_zone_info": false, 00:22:50.074 "zone_management": false, 00:22:50.074 "zone_append": false, 00:22:50.074 "compare": false, 00:22:50.074 "compare_and_write": false, 00:22:50.074 "abort": false, 00:22:50.074 "seek_hole": true, 00:22:50.074 "seek_data": true, 00:22:50.074 "copy": false, 00:22:50.074 "nvme_iov_md": false 00:22:50.074 }, 00:22:50.074 "driver_specific": { 00:22:50.074 "lvol": { 00:22:50.074 "lvol_store_uuid": "3bce1f5a-7b7e-4e56-85c2-b2592a6eb12f", 00:22:50.074 "base_bdev": "nvme0n1", 00:22:50.074 "thin_provision": true, 00:22:50.074 "num_allocated_clusters": 0, 00:22:50.074 "snapshot": false, 00:22:50.074 "clone": false, 00:22:50.074 "esnap_clone": false 00:22:50.074 } 00:22:50.074 } 00:22:50.074 } 00:22:50.074 ]' 00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d aee4479f-1f68-48a2-a82f-6f03a2093482 --l2p_dram_limit 10' 00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 
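
Note: before the bdev_ftl_create call on the next line, the log above has built the whole bdev stack one RPC at a time (clear_lvols first removed the stale lvstore 104156dd-... seen earlier). A condensed sketch of that sequence, with the same RPCs and sizes as in the log; <lvs-uuid> and <lvol-uuid> stand in for the UUIDs printed above (3bce1f5a-... and aee4479f-...):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe device -> nvme0n1
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base bdev
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>             # thin-provisioned 103424 MiB lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache NVMe device -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB split, used below as nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0
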
00:22:50.074 20:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aee4479f-1f68-48a2-a82f-6f03a2093482 --l2p_dram_limit 10 -c nvc0n1p0 00:22:50.333 [2024-07-12 20:39:44.345597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.333 [2024-07-12 20:39:44.345699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:50.333 [2024-07-12 20:39:44.345722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:50.333 [2024-07-12 20:39:44.345739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.333 [2024-07-12 20:39:44.345828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.333 [2024-07-12 20:39:44.345862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.333 [2024-07-12 20:39:44.345877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:50.333 [2024-07-12 20:39:44.345895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.333 [2024-07-12 20:39:44.345928] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:50.333 [2024-07-12 20:39:44.346331] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:50.333 [2024-07-12 20:39:44.346368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.333 [2024-07-12 20:39:44.346386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.333 [2024-07-12 20:39:44.346401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:22:50.333 [2024-07-12 20:39:44.346416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.333 [2024-07-12 20:39:44.346698] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 04eac149-743a-445f-850f-492299bcef1c 00:22:50.333 [2024-07-12 20:39:44.348470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.333 [2024-07-12 20:39:44.348514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:50.333 [2024-07-12 20:39:44.348535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:50.333 [2024-07-12 20:39:44.348548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.333 [2024-07-12 20:39:44.358363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.333 [2024-07-12 20:39:44.358420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.333 [2024-07-12 20:39:44.358446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.737 ms 00:22:50.333 [2024-07-12 20:39:44.358464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.333 [2024-07-12 20:39:44.358626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.333 [2024-07-12 20:39:44.358646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.333 [2024-07-12 20:39:44.358664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:50.333 [2024-07-12 20:39:44.358677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.333 [2024-07-12 20:39:44.358776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.333 [2024-07-12 20:39:44.358795] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:50.333 [2024-07-12 20:39:44.358811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:50.333 [2024-07-12 20:39:44.358824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.333 [2024-07-12 20:39:44.358866] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:50.333 [2024-07-12 20:39:44.361182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.333 [2024-07-12 20:39:44.361224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.333 [2024-07-12 20:39:44.361254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.330 ms 00:22:50.333 [2024-07-12 20:39:44.361284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.333 [2024-07-12 20:39:44.361342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.333 [2024-07-12 20:39:44.361366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:50.333 [2024-07-12 20:39:44.361380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:50.333 [2024-07-12 20:39:44.361398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.333 [2024-07-12 20:39:44.361428] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:50.333 [2024-07-12 20:39:44.361601] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:50.333 [2024-07-12 20:39:44.361623] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:50.333 [2024-07-12 20:39:44.361645] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:50.333 [2024-07-12 20:39:44.361670] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:50.333 [2024-07-12 20:39:44.361691] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:50.333 [2024-07-12 20:39:44.361704] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:50.333 [2024-07-12 20:39:44.361720] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:50.333 [2024-07-12 20:39:44.361739] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:50.333 [2024-07-12 20:39:44.361754] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:50.333 [2024-07-12 20:39:44.361767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.333 [2024-07-12 20:39:44.361781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:50.333 [2024-07-12 20:39:44.361794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:22:50.333 [2024-07-12 20:39:44.361808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.333 [2024-07-12 20:39:44.361901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.333 [2024-07-12 20:39:44.361934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:50.333 [2024-07-12 20:39:44.361951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:50.333 [2024-07-12 20:39:44.361966] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.333 [2024-07-12 20:39:44.362077] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:50.333 [2024-07-12 20:39:44.362101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:50.333 [2024-07-12 20:39:44.362123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.333 [2024-07-12 20:39:44.362143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.333 [2024-07-12 20:39:44.362155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:50.333 [2024-07-12 20:39:44.362168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:50.333 [2024-07-12 20:39:44.362179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:50.333 [2024-07-12 20:39:44.362193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:50.333 [2024-07-12 20:39:44.362204] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:50.333 [2024-07-12 20:39:44.362217] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.333 [2024-07-12 20:39:44.362228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:50.333 [2024-07-12 20:39:44.362257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:50.333 [2024-07-12 20:39:44.362273] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.333 [2024-07-12 20:39:44.362290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:50.333 [2024-07-12 20:39:44.362302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:50.333 [2024-07-12 20:39:44.362315] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.333 [2024-07-12 20:39:44.362326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:50.333 [2024-07-12 20:39:44.362340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:50.333 [2024-07-12 20:39:44.362350] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.333 [2024-07-12 20:39:44.362364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:50.333 [2024-07-12 20:39:44.362377] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:50.333 [2024-07-12 20:39:44.362390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.333 [2024-07-12 20:39:44.362402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:50.333 [2024-07-12 20:39:44.362415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:50.333 [2024-07-12 20:39:44.362426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.333 [2024-07-12 20:39:44.362441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:50.334 [2024-07-12 20:39:44.362452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:50.334 [2024-07-12 20:39:44.362465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.334 [2024-07-12 20:39:44.362477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:50.334 [2024-07-12 20:39:44.362649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:50.334 [2024-07-12 20:39:44.362674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.334 [2024-07-12 
20:39:44.362690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:50.334 [2024-07-12 20:39:44.362702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:50.334 [2024-07-12 20:39:44.362715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.334 [2024-07-12 20:39:44.362726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:50.334 [2024-07-12 20:39:44.362740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:50.334 [2024-07-12 20:39:44.362751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.334 [2024-07-12 20:39:44.362765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:50.334 [2024-07-12 20:39:44.362776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:50.334 [2024-07-12 20:39:44.362790] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.334 [2024-07-12 20:39:44.362801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:50.334 [2024-07-12 20:39:44.362814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:50.334 [2024-07-12 20:39:44.362826] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.334 [2024-07-12 20:39:44.362838] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:50.334 [2024-07-12 20:39:44.362860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:50.334 [2024-07-12 20:39:44.362886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.334 [2024-07-12 20:39:44.362903] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.334 [2024-07-12 20:39:44.362918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:50.334 [2024-07-12 20:39:44.362929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:50.334 [2024-07-12 20:39:44.362943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:50.334 [2024-07-12 20:39:44.362954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:50.334 [2024-07-12 20:39:44.362967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:50.334 [2024-07-12 20:39:44.362982] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:50.334 [2024-07-12 20:39:44.363001] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:50.334 [2024-07-12 20:39:44.363016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.334 [2024-07-12 20:39:44.363032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:50.334 [2024-07-12 20:39:44.363045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:50.334 [2024-07-12 20:39:44.363060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:50.334 [2024-07-12 20:39:44.363073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:50.334 [2024-07-12 20:39:44.363088] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:50.334 [2024-07-12 20:39:44.363100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:50.334 [2024-07-12 20:39:44.363118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:50.334 [2024-07-12 20:39:44.363130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:50.334 [2024-07-12 20:39:44.363158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:50.334 [2024-07-12 20:39:44.363172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:50.334 [2024-07-12 20:39:44.363187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:50.334 [2024-07-12 20:39:44.363199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:50.334 [2024-07-12 20:39:44.363213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:50.334 [2024-07-12 20:39:44.363226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:50.334 [2024-07-12 20:39:44.363252] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:50.334 [2024-07-12 20:39:44.363269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.334 [2024-07-12 20:39:44.363287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:50.334 [2024-07-12 20:39:44.363300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:50.334 [2024-07-12 20:39:44.363316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:50.334 [2024-07-12 20:39:44.363328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:50.334 [2024-07-12 20:39:44.363343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.334 [2024-07-12 20:39:44.363356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:50.334 [2024-07-12 20:39:44.363373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:22:50.334 [2024-07-12 20:39:44.363386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.334 [2024-07-12 20:39:44.363477] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
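
Note: the blk_offs/blk_sz values in the superblock layout dump above are counts of 4096-byte blocks, which is why they agree with the MiB figures in the NV cache layout printed earlier. A quick sanity check in plain shell arithmetic (nothing here comes from the test itself):

    # l2p region: type:0x2, blk_sz:0x5000 -> 0x5000 blocks * 4096 bytes = 80 MiB,
    # matching 'Region l2p ... blocks: 80.00 MiB' in the layout dump above
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # prints 80
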
00:22:50.334 [2024-07-12 20:39:44.363506] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:52.860 [2024-07-12 20:39:46.858554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.858631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:52.860 [2024-07-12 20:39:46.858659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2495.066 ms 00:22:52.860 [2024-07-12 20:39:46.858684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.873247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.873321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:52.860 [2024-07-12 20:39:46.873365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.435 ms 00:22:52.860 [2024-07-12 20:39:46.873379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.873522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.873540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:52.860 [2024-07-12 20:39:46.873568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:52.860 [2024-07-12 20:39:46.873580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.887205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.887271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:52.860 [2024-07-12 20:39:46.887300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.525 ms 00:22:52.860 [2024-07-12 20:39:46.887315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.887372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.887389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:52.860 [2024-07-12 20:39:46.887406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:52.860 [2024-07-12 20:39:46.887429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.888048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.888085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:52.860 [2024-07-12 20:39:46.888105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:22:52.860 [2024-07-12 20:39:46.888122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.888297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.888316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:52.860 [2024-07-12 20:39:46.888331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:22:52.860 [2024-07-12 20:39:46.888344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.898196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.898268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:52.860 [2024-07-12 
20:39:46.898291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.820 ms 00:22:52.860 [2024-07-12 20:39:46.898305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.909066] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:52.860 [2024-07-12 20:39:46.913523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.913564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:52.860 [2024-07-12 20:39:46.913583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.112 ms 00:22:52.860 [2024-07-12 20:39:46.913598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.982741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.982829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:52.860 [2024-07-12 20:39:46.982853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.096 ms 00:22:52.860 [2024-07-12 20:39:46.982872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.983118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.983161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:52.860 [2024-07-12 20:39:46.983188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:22:52.860 [2024-07-12 20:39:46.983204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.986963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.987018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:52.860 [2024-07-12 20:39:46.987037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.707 ms 00:22:52.860 [2024-07-12 20:39:46.987056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.990189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.990253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:52.860 [2024-07-12 20:39:46.990273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms 00:22:52.860 [2024-07-12 20:39:46.990299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.860 [2024-07-12 20:39:46.990744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.860 [2024-07-12 20:39:46.990782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:52.860 [2024-07-12 20:39:46.990807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:22:52.860 [2024-07-12 20:39:46.990838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.119 [2024-07-12 20:39:47.028642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.119 [2024-07-12 20:39:47.028728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:53.119 [2024-07-12 20:39:47.028750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.772 ms 00:22:53.119 [2024-07-12 20:39:47.028767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.119 [2024-07-12 20:39:47.034139] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.119 [2024-07-12 20:39:47.034191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:53.119 [2024-07-12 20:39:47.034209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.300 ms 00:22:53.119 [2024-07-12 20:39:47.034225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.119 [2024-07-12 20:39:47.037988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.119 [2024-07-12 20:39:47.038037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:53.119 [2024-07-12 20:39:47.038054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.703 ms 00:22:53.119 [2024-07-12 20:39:47.038068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.119 [2024-07-12 20:39:47.042105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.119 [2024-07-12 20:39:47.042169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:53.119 [2024-07-12 20:39:47.042188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.989 ms 00:22:53.119 [2024-07-12 20:39:47.042205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.119 [2024-07-12 20:39:47.042281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.119 [2024-07-12 20:39:47.042308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:53.119 [2024-07-12 20:39:47.042323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:53.119 [2024-07-12 20:39:47.042338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.119 [2024-07-12 20:39:47.042420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.119 [2024-07-12 20:39:47.042442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:53.119 [2024-07-12 20:39:47.042458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:53.119 [2024-07-12 20:39:47.042473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.119 [2024-07-12 20:39:47.043762] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2697.656 ms, result 0 00:22:53.119 { 00:22:53.119 "name": "ftl0", 00:22:53.119 "uuid": "04eac149-743a-445f-850f-492299bcef1c" 00:22:53.119 } 00:22:53.119 20:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:53.119 20:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:53.378 20:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:53.378 20:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:53.378 20:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:53.637 /dev/nbd0 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:53.637 1+0 records in 00:22:53.637 1+0 records out 00:22:53.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273704 s, 15.0 MB/s 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:22:53.637 20:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:53.637 [2024-07-12 20:39:47.763205] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:53.637 [2024-07-12 20:39:47.763413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95099 ] 00:22:53.895 [2024-07-12 20:39:47.916045] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:53.896 [2024-07-12 20:39:47.935446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.896 [2024-07-12 20:39:48.029553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.547  Copying: 168/1024 [MB] (168 MBps) Copying: 336/1024 [MB] (168 MBps) Copying: 506/1024 [MB] (169 MBps) Copying: 675/1024 [MB] (168 MBps) Copying: 839/1024 [MB] (164 MBps) Copying: 1000/1024 [MB] (161 MBps) Copying: 1024/1024 [MB] (average 166 MBps) 00:23:00.547 00:23:00.547 20:39:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:03.120 20:39:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:03.120 [2024-07-12 20:39:56.851095] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
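The preceding steps of ftl/dirty_shutdown.sh (lines 70 through 77) export the new FTL bdev as /dev/nbd0, confirm the NBD node answers a direct-I/O read, fill testfile with 1024 MiB of random data, record its md5sum, and then start copying that file onto the device. A minimal illustrative sketch of the same flow in plain shell follows; the paths are shortened and the commands are stand-ins for what the test actually drives through rpc.py, spdk_dd and autotest_common.sh, not the scripts themselves:

    modprobe nbd
    rpc.py nbd_start_disk ftl0 /dev/nbd0                            # export the FTL bdev via the kernel NBD driver
    grep -q -w nbd0 /proc/partitions                                # waitfornbd: poll until the kernel lists the device
    dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct         # sanity read of one block from /dev/nbd0
    dd if=/dev/urandom of=testfile bs=4096 count=262144             # 1024 MiB random payload
    md5sum testfile                                                 # record the payload checksum (dirty_shutdown.sh line 76)
    dd if=testfile of=/dev/nbd0 bs=4096 count=262144 oflag=direct   # the copy whose progress is logged below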
00:23:03.120 [2024-07-12 20:39:56.851323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95196 ] 00:23:03.120 [2024-07-12 20:39:57.011846] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:03.120 [2024-07-12 20:39:57.033829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.120 [2024-07-12 20:39:57.127210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.017  Copying: 15/1024 [MB] (15 MBps) Copying: 31/1024 [MB] (15 MBps) Copying: 46/1024 [MB] (15 MBps) Copying: 62/1024 [MB] (15 MBps) Copying: 79/1024 [MB] (17 MBps) Copying: 96/1024 [MB] (16 MBps) Copying: 113/1024 [MB] (16 MBps) Copying: 129/1024 [MB] (16 MBps) Copying: 146/1024 [MB] (16 MBps) Copying: 163/1024 [MB] (17 MBps) Copying: 180/1024 [MB] (16 MBps) Copying: 197/1024 [MB] (16 MBps) Copying: 214/1024 [MB] (16 MBps) Copying: 231/1024 [MB] (17 MBps) Copying: 248/1024 [MB] (17 MBps) Copying: 265/1024 [MB] (16 MBps) Copying: 281/1024 [MB] (16 MBps) Copying: 298/1024 [MB] (16 MBps) Copying: 315/1024 [MB] (16 MBps) Copying: 332/1024 [MB] (16 MBps) Copying: 348/1024 [MB] (16 MBps) Copying: 364/1024 [MB] (16 MBps) Copying: 381/1024 [MB] (17 MBps) Copying: 398/1024 [MB] (16 MBps) Copying: 415/1024 [MB] (17 MBps) Copying: 432/1024 [MB] (16 MBps) Copying: 448/1024 [MB] (16 MBps) Copying: 465/1024 [MB] (16 MBps) Copying: 480/1024 [MB] (15 MBps) Copying: 497/1024 [MB] (16 MBps) Copying: 514/1024 [MB] (17 MBps) Copying: 532/1024 [MB] (17 MBps) Copying: 549/1024 [MB] (17 MBps) Copying: 566/1024 [MB] (17 MBps) Copying: 582/1024 [MB] (16 MBps) Copying: 599/1024 [MB] (16 MBps) Copying: 617/1024 [MB] (17 MBps) Copying: 634/1024 [MB] (17 MBps) Copying: 650/1024 [MB] (16 MBps) Copying: 666/1024 [MB] (16 MBps) Copying: 683/1024 [MB] (16 MBps) Copying: 700/1024 [MB] (16 MBps) Copying: 716/1024 [MB] (16 MBps) Copying: 733/1024 [MB] (16 MBps) Copying: 750/1024 [MB] (16 MBps) Copying: 767/1024 [MB] (16 MBps) Copying: 783/1024 [MB] (16 MBps) Copying: 800/1024 [MB] (16 MBps) Copying: 816/1024 [MB] (16 MBps) Copying: 832/1024 [MB] (16 MBps) Copying: 849/1024 [MB] (16 MBps) Copying: 865/1024 [MB] (16 MBps) Copying: 882/1024 [MB] (16 MBps) Copying: 898/1024 [MB] (16 MBps) Copying: 914/1024 [MB] (16 MBps) Copying: 931/1024 [MB] (16 MBps) Copying: 947/1024 [MB] (16 MBps) Copying: 964/1024 [MB] (16 MBps) Copying: 980/1024 [MB] (16 MBps) Copying: 996/1024 [MB] (16 MBps) Copying: 1013/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 16 MBps) 00:24:05.017 00:24:05.275 20:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:05.275 20:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:05.534 20:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:05.794 [2024-07-12 20:40:59.698492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.794 [2024-07-12 20:40:59.698564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:05.794 [2024-07-12 20:40:59.698590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:05.794 [2024-07-12 
20:40:59.698607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.794 [2024-07-12 20:40:59.698677] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:05.794 [2024-07-12 20:40:59.699629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.794 [2024-07-12 20:40:59.699686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:05.794 [2024-07-12 20:40:59.699702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:24:05.794 [2024-07-12 20:40:59.699718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.794 [2024-07-12 20:40:59.701643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.794 [2024-07-12 20:40:59.701746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:05.794 [2024-07-12 20:40:59.701764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:24:05.794 [2024-07-12 20:40:59.701778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.794 [2024-07-12 20:40:59.718847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.794 [2024-07-12 20:40:59.718894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:05.794 [2024-07-12 20:40:59.718913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.042 ms 00:24:05.794 [2024-07-12 20:40:59.718930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.794 [2024-07-12 20:40:59.726126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.794 [2024-07-12 20:40:59.726180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:05.794 [2024-07-12 20:40:59.726204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.139 ms 00:24:05.794 [2024-07-12 20:40:59.726220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.794 [2024-07-12 20:40:59.727919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.794 [2024-07-12 20:40:59.727995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:05.794 [2024-07-12 20:40:59.728012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:24:05.794 [2024-07-12 20:40:59.728025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.794 [2024-07-12 20:40:59.732548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.794 [2024-07-12 20:40:59.732599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:05.794 [2024-07-12 20:40:59.732617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.481 ms 00:24:05.794 [2024-07-12 20:40:59.732632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.794 [2024-07-12 20:40:59.732801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.794 [2024-07-12 20:40:59.732827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:05.794 [2024-07-12 20:40:59.732842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:24:05.794 [2024-07-12 20:40:59.732861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.794 [2024-07-12 20:40:59.734957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.794 [2024-07-12 20:40:59.735000] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:05.794 [2024-07-12 20:40:59.735016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:24:05.795 [2024-07-12 20:40:59.735031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.795 [2024-07-12 20:40:59.736686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.795 [2024-07-12 20:40:59.736785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:05.795 [2024-07-12 20:40:59.736800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms 00:24:05.795 [2024-07-12 20:40:59.736814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.795 [2024-07-12 20:40:59.738018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.795 [2024-07-12 20:40:59.738090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:05.795 [2024-07-12 20:40:59.738106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:24:05.795 [2024-07-12 20:40:59.738123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.795 [2024-07-12 20:40:59.739319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.795 [2024-07-12 20:40:59.739362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:05.795 [2024-07-12 20:40:59.739378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:24:05.795 [2024-07-12 20:40:59.739392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.795 [2024-07-12 20:40:59.739436] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:05.795 [2024-07-12 20:40:59.739476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739663] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.739987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740017] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 
20:40:59.740401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:05.795 [2024-07-12 20:40:59.740456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:24:05.796 [2024-07-12 20:40:59.740769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:05.796 [2024-07-12 20:40:59.740969] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:05.796 [2024-07-12 20:40:59.740988] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04eac149-743a-445f-850f-492299bcef1c 00:24:05.796 [2024-07-12 20:40:59.741003] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:05.796 [2024-07-12 20:40:59.741015] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:05.796 [2024-07-12 20:40:59.741031] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:05.796 [2024-07-12 20:40:59.741043] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:05.796 [2024-07-12 20:40:59.741058] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:05.796 [2024-07-12 20:40:59.741071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:05.796 [2024-07-12 20:40:59.741090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:05.796 [2024-07-12 20:40:59.741101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:05.796 [2024-07-12 20:40:59.741114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:05.796 [2024-07-12 20:40:59.741126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.796 [2024-07-12 20:40:59.741140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:05.796 [2024-07-12 20:40:59.741162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:24:05.796 [2024-07-12 20:40:59.741178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:05.796 [2024-07-12 20:40:59.743485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.796 [2024-07-12 20:40:59.743533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:05.796 [2024-07-12 20:40:59.743550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.280 ms 00:24:05.796 [2024-07-12 20:40:59.743564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.796 [2024-07-12 20:40:59.743722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.796 [2024-07-12 20:40:59.743742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:05.796 [2024-07-12 20:40:59.743766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:24:05.796 [2024-07-12 20:40:59.743781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.796 [2024-07-12 20:40:59.752612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.796 [2024-07-12 20:40:59.752662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:05.796 [2024-07-12 20:40:59.752680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.796 [2024-07-12 20:40:59.752700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.796 [2024-07-12 20:40:59.752771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.796 [2024-07-12 20:40:59.752792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:05.796 [2024-07-12 20:40:59.752806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.796 [2024-07-12 20:40:59.752821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.796 [2024-07-12 20:40:59.752929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.796 [2024-07-12 20:40:59.752959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:05.796 [2024-07-12 20:40:59.752974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.796 [2024-07-12 20:40:59.752989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.796 [2024-07-12 20:40:59.753020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.796 [2024-07-12 20:40:59.753040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:05.796 [2024-07-12 20:40:59.753067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.796 [2024-07-12 20:40:59.753081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.796 [2024-07-12 20:40:59.767463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.796 [2024-07-12 20:40:59.767543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:05.796 [2024-07-12 20:40:59.767564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.796 [2024-07-12 20:40:59.767586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.796 [2024-07-12 20:40:59.778580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.796 [2024-07-12 20:40:59.778643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:05.796 [2024-07-12 20:40:59.778662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.796 
[2024-07-12 20:40:59.778678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.796 [2024-07-12 20:40:59.778793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.796 [2024-07-12 20:40:59.778820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:05.796 [2024-07-12 20:40:59.778834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.797 [2024-07-12 20:40:59.778864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.797 [2024-07-12 20:40:59.778936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.797 [2024-07-12 20:40:59.778962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:05.797 [2024-07-12 20:40:59.778976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.797 [2024-07-12 20:40:59.778989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.797 [2024-07-12 20:40:59.779084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.797 [2024-07-12 20:40:59.779108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:05.797 [2024-07-12 20:40:59.779122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.797 [2024-07-12 20:40:59.779136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.797 [2024-07-12 20:40:59.779187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.797 [2024-07-12 20:40:59.779210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:05.797 [2024-07-12 20:40:59.779226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.797 [2024-07-12 20:40:59.779299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.797 [2024-07-12 20:40:59.779355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.797 [2024-07-12 20:40:59.779380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:05.797 [2024-07-12 20:40:59.779394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.797 [2024-07-12 20:40:59.779408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.797 [2024-07-12 20:40:59.779482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.797 [2024-07-12 20:40:59.779510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:05.797 [2024-07-12 20:40:59.779524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.797 [2024-07-12 20:40:59.779538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.797 [2024-07-12 20:40:59.779735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 81.182 ms, result 0 00:24:05.797 true 00:24:05.797 20:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 94962 00:24:05.797 20:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid94962 00:24:05.797 20:40:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:05.797 [2024-07-12 20:40:59.923586] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 
initialization... 00:24:05.797 [2024-07-12 20:40:59.923812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95827 ] 00:24:06.055 [2024-07-12 20:41:00.079847] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:06.055 [2024-07-12 20:41:00.101183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.055 [2024-07-12 20:41:00.191735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.803  Copying: 166/1024 [MB] (166 MBps) Copying: 331/1024 [MB] (165 MBps) Copying: 491/1024 [MB] (159 MBps) Copying: 656/1024 [MB] (165 MBps) Copying: 830/1024 [MB] (173 MBps) Copying: 998/1024 [MB] (167 MBps) Copying: 1024/1024 [MB] (average 166 MBps) 00:24:12.803 00:24:12.803 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 94962 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:12.803 20:41:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:12.803 [2024-07-12 20:41:06.821438] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:24:12.803 [2024-07-12 20:41:06.821665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95901 ] 00:24:13.062 [2024-07-12 20:41:06.965918] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
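At this point the log has already recorded the core of the dirty-shutdown scenario: the bdev subsystem configuration was captured as JSON (echo '{"subsystems": [' / save_subsystem_config / echo ']}' at lines 64-66), the FTL bdev was unloaded, the spdk_tgt process (pid 94962) was terminated with kill -9 (the shell's "94962 Killed" message above), and a second 1024 MiB random file, testfile2, was generated. The spdk_dd invocation now starting (line 88) recreates ftl0 from that saved JSON and writes testfile2 into the device; the entries that follow show the blobstore recovery and FTL metadata restore performed during this bring-up. An illustrative restatement of the invocation, with flags copied from the log and paths shortened:

    # --ob     write into the ftl0 bdev, recreated inside spdk_dd from the saved config
    # --seek   skip 262144 blocks on the output bdev before writing
    # --json   the bdev subsystem config saved before the target was killed
    spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json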
00:24:13.062 [2024-07-12 20:41:06.987497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.062 [2024-07-12 20:41:07.077753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.320 [2024-07-12 20:41:07.210601] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:13.320 [2024-07-12 20:41:07.210687] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:13.320 [2024-07-12 20:41:07.276649] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:13.320 [2024-07-12 20:41:07.276968] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:13.320 [2024-07-12 20:41:07.277257] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:13.580 [2024-07-12 20:41:07.545953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.580 [2024-07-12 20:41:07.546025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:13.580 [2024-07-12 20:41:07.546043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:13.580 [2024-07-12 20:41:07.546056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.580 [2024-07-12 20:41:07.546132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.580 [2024-07-12 20:41:07.546153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:13.580 [2024-07-12 20:41:07.546166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:13.580 [2024-07-12 20:41:07.546176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.580 [2024-07-12 20:41:07.546206] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:13.580 [2024-07-12 20:41:07.546521] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:13.580 [2024-07-12 20:41:07.546551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.580 [2024-07-12 20:41:07.546563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:13.580 [2024-07-12 20:41:07.546576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:24:13.580 [2024-07-12 20:41:07.546587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.580 [2024-07-12 20:41:07.548693] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:13.580 [2024-07-12 20:41:07.551744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.580 [2024-07-12 20:41:07.551797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:13.580 [2024-07-12 20:41:07.551814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.068 ms 00:24:13.580 [2024-07-12 20:41:07.551825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.580 [2024-07-12 20:41:07.551899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.580 [2024-07-12 20:41:07.551926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:13.580 [2024-07-12 20:41:07.551939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:13.580 [2024-07-12 20:41:07.551954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.580 [2024-07-12 20:41:07.561209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:13.580 [2024-07-12 20:41:07.561283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:13.580 [2024-07-12 20:41:07.561299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.191 ms 00:24:13.580 [2024-07-12 20:41:07.561342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.580 [2024-07-12 20:41:07.561449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.580 [2024-07-12 20:41:07.561481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:13.580 [2024-07-12 20:41:07.561506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:13.580 [2024-07-12 20:41:07.561520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.580 [2024-07-12 20:41:07.561618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.580 [2024-07-12 20:41:07.561646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:13.580 [2024-07-12 20:41:07.561660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:13.580 [2024-07-12 20:41:07.561677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.580 [2024-07-12 20:41:07.561723] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:13.580 [2024-07-12 20:41:07.564075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.580 [2024-07-12 20:41:07.564128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:13.580 [2024-07-12 20:41:07.564153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.362 ms 00:24:13.580 [2024-07-12 20:41:07.564164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.580 [2024-07-12 20:41:07.564212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.581 [2024-07-12 20:41:07.564229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:13.581 [2024-07-12 20:41:07.564240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:13.581 [2024-07-12 20:41:07.564320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.581 [2024-07-12 20:41:07.564376] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:13.581 [2024-07-12 20:41:07.564416] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:13.581 [2024-07-12 20:41:07.564476] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:13.581 [2024-07-12 20:41:07.564509] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:13.581 [2024-07-12 20:41:07.564641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:13.581 [2024-07-12 20:41:07.564664] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:13.581 [2024-07-12 20:41:07.564679] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:13.581 [2024-07-12 20:41:07.564695] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:13.581 [2024-07-12 20:41:07.564736] 
ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:13.581 [2024-07-12 20:41:07.564767] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:13.581 [2024-07-12 20:41:07.564784] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:13.581 [2024-07-12 20:41:07.564797] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:13.581 [2024-07-12 20:41:07.564821] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:13.581 [2024-07-12 20:41:07.564852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.581 [2024-07-12 20:41:07.564864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:13.581 [2024-07-12 20:41:07.564892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:24:13.581 [2024-07-12 20:41:07.564919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.581 [2024-07-12 20:41:07.565035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.581 [2024-07-12 20:41:07.565055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:13.581 [2024-07-12 20:41:07.565067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:13.581 [2024-07-12 20:41:07.565094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.581 [2024-07-12 20:41:07.565219] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:13.581 [2024-07-12 20:41:07.565272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:13.581 [2024-07-12 20:41:07.565291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:13.581 [2024-07-12 20:41:07.565336] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:13.581 [2024-07-12 20:41:07.565377] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:13.581 [2024-07-12 20:41:07.565410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:13.581 [2024-07-12 20:41:07.565421] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:13.581 [2024-07-12 20:41:07.565453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:13.581 [2024-07-12 20:41:07.565477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:13.581 [2024-07-12 20:41:07.565487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:13.581 [2024-07-12 20:41:07.565502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:13.581 [2024-07-12 20:41:07.565520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:13.581 [2024-07-12 20:41:07.565533] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:13.581 [2024-07-12 20:41:07.565554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:13.581 [2024-07-12 20:41:07.565565] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:13.581 [2024-07-12 20:41:07.565586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:13.581 [2024-07-12 20:41:07.565606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:13.581 [2024-07-12 20:41:07.565617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:13.581 [2024-07-12 20:41:07.565648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:13.581 [2024-07-12 20:41:07.565663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:13.581 [2024-07-12 20:41:07.565684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:13.581 [2024-07-12 20:41:07.565695] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565705] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:13.581 [2024-07-12 20:41:07.565730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:13.581 [2024-07-12 20:41:07.565741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:13.581 [2024-07-12 20:41:07.565775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:13.581 [2024-07-12 20:41:07.565785] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:13.581 [2024-07-12 20:41:07.565796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:13.581 [2024-07-12 20:41:07.565807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:13.581 [2024-07-12 20:41:07.565818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:13.581 [2024-07-12 20:41:07.565829] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:13.581 [2024-07-12 20:41:07.565852] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:13.581 [2024-07-12 20:41:07.565863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565872] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:13.581 [2024-07-12 20:41:07.565883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:13.581 [2024-07-12 20:41:07.565904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:13.581 [2024-07-12 20:41:07.565915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.581 [2024-07-12 20:41:07.565926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:13.581 [2024-07-12 20:41:07.565937] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:13.581 [2024-07-12 20:41:07.565947] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:13.581 
[2024-07-12 20:41:07.565957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:13.581 [2024-07-12 20:41:07.565966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:13.581 [2024-07-12 20:41:07.565977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:13.581 [2024-07-12 20:41:07.565988] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:13.581 [2024-07-12 20:41:07.566006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:13.581 [2024-07-12 20:41:07.566019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:13.581 [2024-07-12 20:41:07.566030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:13.581 [2024-07-12 20:41:07.566044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:13.581 [2024-07-12 20:41:07.566055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:13.581 [2024-07-12 20:41:07.566066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:13.581 [2024-07-12 20:41:07.566077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:13.581 [2024-07-12 20:41:07.566088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:13.581 [2024-07-12 20:41:07.566098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:13.581 [2024-07-12 20:41:07.566109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:13.581 [2024-07-12 20:41:07.566120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:13.581 [2024-07-12 20:41:07.566131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:13.581 [2024-07-12 20:41:07.566142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:13.581 [2024-07-12 20:41:07.566153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:13.581 [2024-07-12 20:41:07.566164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:13.581 [2024-07-12 20:41:07.566175] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:13.581 [2024-07-12 20:41:07.566188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:13.581 [2024-07-12 20:41:07.566200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:13.581 [2024-07-12 20:41:07.566212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:13.581 [2024-07-12 20:41:07.566226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:13.581 [2024-07-12 20:41:07.566238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:13.581 [2024-07-12 20:41:07.566250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.581 [2024-07-12 20:41:07.566261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:13.581 [2024-07-12 20:41:07.566272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:24:13.581 [2024-07-12 20:41:07.566298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.581 [2024-07-12 20:41:07.608301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.581 [2024-07-12 20:41:07.608387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:13.582 [2024-07-12 20:41:07.608416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.854 ms 00:24:13.582 [2024-07-12 20:41:07.608463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.608646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.608671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:13.582 [2024-07-12 20:41:07.608690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:13.582 [2024-07-12 20:41:07.608706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.623062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.623109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:13.582 [2024-07-12 20:41:07.623133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.233 ms 00:24:13.582 [2024-07-12 20:41:07.623144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.623195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.623212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:13.582 [2024-07-12 20:41:07.623225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:13.582 [2024-07-12 20:41:07.623301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.623987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.624038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:13.582 [2024-07-12 20:41:07.624069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:24:13.582 [2024-07-12 20:41:07.624093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.624320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.624360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:13.582 [2024-07-12 20:41:07.624375] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:24:13.582 [2024-07-12 20:41:07.624387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.632482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.632521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:13.582 [2024-07-12 20:41:07.632537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.061 ms 00:24:13.582 [2024-07-12 20:41:07.632563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.635875] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:13.582 [2024-07-12 20:41:07.635912] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:13.582 [2024-07-12 20:41:07.635929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.635941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:13.582 [2024-07-12 20:41:07.635953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:24:13.582 [2024-07-12 20:41:07.635963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.652060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.652115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:13.582 [2024-07-12 20:41:07.652134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.019 ms 00:24:13.582 [2024-07-12 20:41:07.652153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.654845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.654882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:13.582 [2024-07-12 20:41:07.654898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:24:13.582 [2024-07-12 20:41:07.654910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.656600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.656637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:13.582 [2024-07-12 20:41:07.656654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:24:13.582 [2024-07-12 20:41:07.656665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.657204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.657258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:13.582 [2024-07-12 20:41:07.657276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:24:13.582 [2024-07-12 20:41:07.657287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.679956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.680023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:13.582 [2024-07-12 20:41:07.680044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.640 ms 00:24:13.582 [2024-07-12 20:41:07.680069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.688988] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:13.582 [2024-07-12 20:41:07.693580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.693621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:13.582 [2024-07-12 20:41:07.693641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.386 ms 00:24:13.582 [2024-07-12 20:41:07.693667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.693808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.693844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:13.582 [2024-07-12 20:41:07.693859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:13.582 [2024-07-12 20:41:07.693882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.694002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.694022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:13.582 [2024-07-12 20:41:07.694036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:13.582 [2024-07-12 20:41:07.694048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.694083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.694098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:13.582 [2024-07-12 20:41:07.694118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:13.582 [2024-07-12 20:41:07.694140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.694189] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:13.582 [2024-07-12 20:41:07.694207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.694218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:13.582 [2024-07-12 20:41:07.694230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:13.582 [2024-07-12 20:41:07.694257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.698914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.698956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:13.582 [2024-07-12 20:41:07.698983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.622 ms 00:24:13.582 [2024-07-12 20:41:07.698995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 [2024-07-12 20:41:07.699087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.582 [2024-07-12 20:41:07.699110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:13.582 [2024-07-12 20:41:07.699129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:13.582 [2024-07-12 20:41:07.699142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.582 
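The FTL management records above and below all follow the same shape emitted by mngt/ftl_mngt.c: an "Action" (or "Rollback") entry, then "name:", "duration:" and "status:" entries for each step, and finally a finish_msg record giving the total duration and result of the whole management process. As a minimal, illustrative sketch (not part of the SPDK tooling), the Python snippet below tabulates those per-step durations from a captured log; the regular expressions and the positional pairing of name records with duration records are assumptions based only on the record format visible here, and any record that the console line wrapping happens to split in two is simply missed.

    import re
    import sys

    # Patterns assumed from the trace_step / finish_msg records in this log (illustrative only).
    NAME_RE = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?) \d{2}:\d{2}:\d{2}")
    DUR_RE = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")
    FINISH_RE = re.compile(r"459:finish_msg: .*? name '([^']+)', duration = ([0-9.]+) ms")

    def summarize(text: str) -> None:
        names = NAME_RE.findall(text)
        durations = [float(ms) for ms in DUR_RE.findall(text)]
        # Each "name:" record is immediately followed by its "duration:" record,
        # so pairing them by position is good enough for a rough breakdown.
        for name, ms in zip(names, durations):
            print(f"{ms:10.3f} ms  {name}")
        print(f"{sum(durations):10.3f} ms  attributed to individual steps")
        for proc, total in FINISH_RE.findall(text):
            print(f"{float(total):10.3f} ms  reported total for management process '{proc}'")

    if __name__ == "__main__":
        summarize(sys.stdin.read())

Fed this console output, it would list the individual 'FTL startup' steps against the 154.134 ms total reported by the finish_msg record just below; the per-step sum is expected to land somewhat under that total, since time spent between steps is not attributed to any of them.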
[2024-07-12 20:41:07.700629] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.134 ms, result 0 00:24:55.643  Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-12 20:41:49.600702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.644 [2024-07-12 20:41:49.600991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:55.644 [2024-07-12 20:41:49.601147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:55.644 [2024-07-12 20:41:49.601201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.644 [2024-07-12 20:41:49.603703] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:55.644 [2024-07-12 20:41:49.608188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.644 [2024-07-12 20:41:49.608374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:55.644 [2024-07-12 20:41:49.608502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.269 ms 00:24:55.644 [2024-07-12 20:41:49.608650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.644 [2024-07-12 20:41:49.620683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.644 [2024-07-12 20:41:49.620935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:55.644 [2024-07-12 20:41:49.621076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.891 ms 00:24:55.644 [2024-07-12 20:41:49.621201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.644 [2024-07-12 20:41:49.644470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.644 [2024-07-12 20:41:49.644765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:55.644 [2024-07-12 20:41:49.644892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.159 ms 00:24:55.644 [2024-07-12 20:41:49.644945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.644 [2024-07-12 20:41:49.651686] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.644 [2024-07-12 20:41:49.651843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:55.644 [2024-07-12 20:41:49.651975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.594 ms 00:24:55.644 [2024-07-12 20:41:49.652122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.644 [2024-07-12 20:41:49.654183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.644 [2024-07-12 20:41:49.654348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:55.644 [2024-07-12 20:41:49.654460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.907 ms 00:24:55.644 [2024-07-12 20:41:49.654575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.644 [2024-07-12 20:41:49.658625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.644 [2024-07-12 20:41:49.658775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:55.644 [2024-07-12 20:41:49.658887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.878 ms 00:24:55.644 [2024-07-12 20:41:49.659027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.644 [2024-07-12 20:41:49.766919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.644 [2024-07-12 20:41:49.767250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:55.644 [2024-07-12 20:41:49.767450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.797 ms 00:24:55.644 [2024-07-12 20:41:49.767493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.644 [2024-07-12 20:41:49.770484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.644 [2024-07-12 20:41:49.770627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:55.644 [2024-07-12 20:41:49.770737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.956 ms 00:24:55.644 [2024-07-12 20:41:49.770850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.644 [2024-07-12 20:41:49.772424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.644 [2024-07-12 20:41:49.772566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:55.644 [2024-07-12 20:41:49.772676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:24:55.644 [2024-07-12 20:41:49.772806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.644 [2024-07-12 20:41:49.774105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.644 [2024-07-12 20:41:49.774262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:55.644 [2024-07-12 20:41:49.774305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:24:55.644 [2024-07-12 20:41:49.774317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.644 [2024-07-12 20:41:49.775527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.644 [2024-07-12 20:41:49.775561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:55.644 [2024-07-12 20:41:49.775576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.137 ms 00:24:55.644 [2024-07-12 20:41:49.775587] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:55.644 [2024-07-12 20:41:49.775623] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:55.644 [2024-07-12 20:41:49.775648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130304 / 261120 wr_cnt: 1 state: open 00:24:55.644 [2024-07-12 20:41:49.775663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.775995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.776924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.777071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.777206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.777315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:55.644 [2024-07-12 20:41:49.777431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.777560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.777759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.777913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.778077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.778234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.778388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.778519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.778646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.778714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.778811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.778929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779150] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779494] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:55.645 [2024-07-12 20:41:49.779528] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:55.645 [2024-07-12 20:41:49.779541] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04eac149-743a-445f-850f-492299bcef1c 00:24:55.645 [2024-07-12 20:41:49.779554] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130304 00:24:55.645 [2024-07-12 20:41:49.779565] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131264 00:24:55.645 [2024-07-12 20:41:49.779576] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130304 00:24:55.645 [2024-07-12 20:41:49.779599] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:24:55.645 [2024-07-12 20:41:49.779611] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:55.645 [2024-07-12 20:41:49.779623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:55.645 [2024-07-12 20:41:49.779655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:55.645 [2024-07-12 20:41:49.779665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:55.645 [2024-07-12 20:41:49.779675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:55.645 [2024-07-12 20:41:49.779688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.645 [2024-07-12 20:41:49.779709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:55.645 [2024-07-12 20:41:49.779722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.066 ms 00:24:55.645 [2024-07-12 20:41:49.779733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.645 [2024-07-12 20:41:49.781888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.645 [2024-07-12 20:41:49.781921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:55.645 [2024-07-12 20:41:49.781943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.103 ms 00:24:55.645 [2024-07-12 20:41:49.781963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.645 [2024-07-12 20:41:49.782107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.645 [2024-07-12 20:41:49.782122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:55.645 [2024-07-12 20:41:49.782149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:24:55.645 [2024-07-12 20:41:49.782160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.645 [2024-07-12 20:41:49.789403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.645 [2024-07-12 20:41:49.789477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:55.645 [2024-07-12 20:41:49.789503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.645 [2024-07-12 20:41:49.789516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.645 [2024-07-12 20:41:49.789606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.645 [2024-07-12 20:41:49.789621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:24:55.645 [2024-07-12 20:41:49.789633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.645 [2024-07-12 20:41:49.789644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.645 [2024-07-12 20:41:49.789731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.645 [2024-07-12 20:41:49.789750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:55.645 [2024-07-12 20:41:49.789763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.645 [2024-07-12 20:41:49.789781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.645 [2024-07-12 20:41:49.789804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.645 [2024-07-12 20:41:49.789818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:55.645 [2024-07-12 20:41:49.789830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.645 [2024-07-12 20:41:49.789841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.904 [2024-07-12 20:41:49.805327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.904 [2024-07-12 20:41:49.805404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:55.904 [2024-07-12 20:41:49.805424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.904 [2024-07-12 20:41:49.805450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.904 [2024-07-12 20:41:49.815727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.904 [2024-07-12 20:41:49.815799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:55.904 [2024-07-12 20:41:49.815818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.904 [2024-07-12 20:41:49.815830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.904 [2024-07-12 20:41:49.815905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.904 [2024-07-12 20:41:49.815922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:55.904 [2024-07-12 20:41:49.815934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.904 [2024-07-12 20:41:49.815945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.904 [2024-07-12 20:41:49.816004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.904 [2024-07-12 20:41:49.816019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:55.904 [2024-07-12 20:41:49.816045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.904 [2024-07-12 20:41:49.816056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.904 [2024-07-12 20:41:49.816167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.904 [2024-07-12 20:41:49.816187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:55.904 [2024-07-12 20:41:49.816200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.904 [2024-07-12 20:41:49.816211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.904 [2024-07-12 20:41:49.816279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.904 [2024-07-12 20:41:49.816306] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:55.904 [2024-07-12 20:41:49.816319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.904 [2024-07-12 20:41:49.816340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.904 [2024-07-12 20:41:49.816388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.904 [2024-07-12 20:41:49.816404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:55.904 [2024-07-12 20:41:49.816417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.904 [2024-07-12 20:41:49.816428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.904 [2024-07-12 20:41:49.816487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.904 [2024-07-12 20:41:49.816504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:55.904 [2024-07-12 20:41:49.816516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.904 [2024-07-12 20:41:49.816527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.904 [2024-07-12 20:41:49.816671] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 216.906 ms, result 0 00:24:56.470 00:24:56.470 00:24:56.470 20:41:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:58.998 20:41:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:58.998 [2024-07-12 20:41:52.718546] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:24:58.998 [2024-07-12 20:41:52.718711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96376 ] 00:24:58.998 [2024-07-12 20:41:52.861990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:58.998 [2024-07-12 20:41:52.883333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.998 [2024-07-12 20:41:52.981929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.998 [2024-07-12 20:41:53.109474] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:58.998 [2024-07-12 20:41:53.109584] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:59.256 [2024-07-12 20:41:53.271926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.256 [2024-07-12 20:41:53.272016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:59.256 [2024-07-12 20:41:53.272038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:59.256 [2024-07-12 20:41:53.272062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.256 [2024-07-12 20:41:53.272151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.256 [2024-07-12 20:41:53.272178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:59.256 [2024-07-12 20:41:53.272196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:59.256 [2024-07-12 20:41:53.272216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.256 [2024-07-12 20:41:53.272266] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:59.256 [2024-07-12 20:41:53.272638] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:59.256 [2024-07-12 20:41:53.272676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.256 [2024-07-12 20:41:53.272700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:59.256 [2024-07-12 20:41:53.272712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:24:59.256 [2024-07-12 20:41:53.272723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.256 [2024-07-12 20:41:53.274711] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:59.256 [2024-07-12 20:41:53.277661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.256 [2024-07-12 20:41:53.277711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:59.256 [2024-07-12 20:41:53.277728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:24:59.256 [2024-07-12 20:41:53.277740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.256 [2024-07-12 20:41:53.277814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.256 [2024-07-12 20:41:53.277834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:59.256 [2024-07-12 20:41:53.277847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:59.256 [2024-07-12 20:41:53.277857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.256 [2024-07-12 20:41:53.286579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.256 [2024-07-12 20:41:53.286642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:59.256 [2024-07-12 20:41:53.286676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.638 ms 00:24:59.256 [2024-07-12 20:41:53.286688] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.256 [2024-07-12 20:41:53.286805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.256 [2024-07-12 20:41:53.286830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:59.256 [2024-07-12 20:41:53.286848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:59.256 [2024-07-12 20:41:53.286867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.256 [2024-07-12 20:41:53.286973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.256 [2024-07-12 20:41:53.286999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:59.256 [2024-07-12 20:41:53.287021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:59.256 [2024-07-12 20:41:53.287041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.256 [2024-07-12 20:41:53.287079] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:59.256 [2024-07-12 20:41:53.289268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.256 [2024-07-12 20:41:53.289305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:59.256 [2024-07-12 20:41:53.289321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.200 ms 00:24:59.256 [2024-07-12 20:41:53.289331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.256 [2024-07-12 20:41:53.289389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.256 [2024-07-12 20:41:53.289408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:59.256 [2024-07-12 20:41:53.289420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:59.256 [2024-07-12 20:41:53.289445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.256 [2024-07-12 20:41:53.289486] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:59.256 [2024-07-12 20:41:53.289531] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:59.256 [2024-07-12 20:41:53.289574] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:59.256 [2024-07-12 20:41:53.289612] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:59.256 [2024-07-12 20:41:53.289719] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:59.256 [2024-07-12 20:41:53.289736] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:59.256 [2024-07-12 20:41:53.289750] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:59.256 [2024-07-12 20:41:53.289765] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:59.256 [2024-07-12 20:41:53.289778] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:59.256 [2024-07-12 20:41:53.289790] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:59.256 [2024-07-12 20:41:53.289801] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:24:59.256 [2024-07-12 20:41:53.289811] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:59.256 [2024-07-12 20:41:53.289822] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:59.256 [2024-07-12 20:41:53.289833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.256 [2024-07-12 20:41:53.289849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:59.256 [2024-07-12 20:41:53.289860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:24:59.256 [2024-07-12 20:41:53.289880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.256 [2024-07-12 20:41:53.289976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.256 [2024-07-12 20:41:53.289992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:59.256 [2024-07-12 20:41:53.290008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:59.256 [2024-07-12 20:41:53.290021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.256 [2024-07-12 20:41:53.290137] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:59.256 [2024-07-12 20:41:53.290156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:59.256 [2024-07-12 20:41:53.290186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:59.256 [2024-07-12 20:41:53.290198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.256 [2024-07-12 20:41:53.290209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:59.256 [2024-07-12 20:41:53.290219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:59.256 [2024-07-12 20:41:53.290229] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:59.256 [2024-07-12 20:41:53.290255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:59.256 [2024-07-12 20:41:53.290270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:59.256 [2024-07-12 20:41:53.290280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:59.256 [2024-07-12 20:41:53.290290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:59.256 [2024-07-12 20:41:53.290301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:59.256 [2024-07-12 20:41:53.290310] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:59.256 [2024-07-12 20:41:53.290320] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:59.256 [2024-07-12 20:41:53.290338] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:59.257 [2024-07-12 20:41:53.290375] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.257 [2024-07-12 20:41:53.290394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:59.257 [2024-07-12 20:41:53.290410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:59.257 [2024-07-12 20:41:53.290421] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.257 [2024-07-12 20:41:53.290432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:59.257 [2024-07-12 20:41:53.290442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:59.257 [2024-07-12 20:41:53.290453] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.257 [2024-07-12 20:41:53.290463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:59.257 [2024-07-12 20:41:53.290473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:59.257 [2024-07-12 20:41:53.290483] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.257 [2024-07-12 20:41:53.290493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:59.257 [2024-07-12 20:41:53.290503] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:59.257 [2024-07-12 20:41:53.290512] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.257 [2024-07-12 20:41:53.290522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:59.257 [2024-07-12 20:41:53.290532] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:59.257 [2024-07-12 20:41:53.290548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.257 [2024-07-12 20:41:53.290559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:59.257 [2024-07-12 20:41:53.290569] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:59.257 [2024-07-12 20:41:53.290578] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:59.257 [2024-07-12 20:41:53.290588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:59.257 [2024-07-12 20:41:53.290599] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:59.257 [2024-07-12 20:41:53.290609] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:59.257 [2024-07-12 20:41:53.290619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:59.257 [2024-07-12 20:41:53.290629] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:59.257 [2024-07-12 20:41:53.290638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.257 [2024-07-12 20:41:53.290648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:59.257 [2024-07-12 20:41:53.290658] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:59.257 [2024-07-12 20:41:53.290669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.257 [2024-07-12 20:41:53.290678] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:59.257 [2024-07-12 20:41:53.290689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:59.257 [2024-07-12 20:41:53.290700] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:59.257 [2024-07-12 20:41:53.290714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.257 [2024-07-12 20:41:53.290726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:59.257 [2024-07-12 20:41:53.290737] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:59.257 [2024-07-12 20:41:53.290748] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:59.257 [2024-07-12 20:41:53.290758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:59.257 [2024-07-12 20:41:53.290769] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:59.257 [2024-07-12 20:41:53.290779] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:24:59.257 [2024-07-12 20:41:53.290791] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:59.257 [2024-07-12 20:41:53.290816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:59.257 [2024-07-12 20:41:53.290837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:59.257 [2024-07-12 20:41:53.290849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:59.257 [2024-07-12 20:41:53.290860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:59.257 [2024-07-12 20:41:53.290871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:59.257 [2024-07-12 20:41:53.290882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:59.257 [2024-07-12 20:41:53.290893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:59.257 [2024-07-12 20:41:53.290904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:59.257 [2024-07-12 20:41:53.290920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:59.257 [2024-07-12 20:41:53.290931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:59.257 [2024-07-12 20:41:53.290943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:59.257 [2024-07-12 20:41:53.290954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:59.257 [2024-07-12 20:41:53.290965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:59.257 [2024-07-12 20:41:53.290976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:59.257 [2024-07-12 20:41:53.290988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:59.257 [2024-07-12 20:41:53.290998] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:59.257 [2024-07-12 20:41:53.291019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:59.257 [2024-07-12 20:41:53.291040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:59.257 [2024-07-12 20:41:53.291052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:59.257 [2024-07-12 20:41:53.291063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:59.257 [2024-07-12 20:41:53.291075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:59.257 [2024-07-12 20:41:53.291087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.257 [2024-07-12 20:41:53.291105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:59.257 [2024-07-12 20:41:53.291116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:24:59.257 [2024-07-12 20:41:53.291131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.257 [2024-07-12 20:41:53.320871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.257 [2024-07-12 20:41:53.321217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:59.257 [2024-07-12 20:41:53.321297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.639 ms 00:24:59.257 [2024-07-12 20:41:53.321317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.257 [2024-07-12 20:41:53.321485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.257 [2024-07-12 20:41:53.321535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:59.257 [2024-07-12 20:41:53.321562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:24:59.257 [2024-07-12 20:41:53.321578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.257 [2024-07-12 20:41:53.334699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.257 [2024-07-12 20:41:53.334766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:59.257 [2024-07-12 20:41:53.334786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.998 ms 00:24:59.257 [2024-07-12 20:41:53.334798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.257 [2024-07-12 20:41:53.334870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.257 [2024-07-12 20:41:53.334895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:59.257 [2024-07-12 20:41:53.334909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:59.257 [2024-07-12 20:41:53.334921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.257 [2024-07-12 20:41:53.335569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.257 [2024-07-12 20:41:53.335590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:59.257 [2024-07-12 20:41:53.335609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:24:59.257 [2024-07-12 20:41:53.335631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.257 [2024-07-12 20:41:53.335802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.257 [2024-07-12 20:41:53.335823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:59.257 [2024-07-12 20:41:53.335840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:24:59.257 [2024-07-12 20:41:53.335860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.257 [2024-07-12 20:41:53.343632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.257 [2024-07-12 
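The superblock metadata layout dumped above lists each region as blk_offs/blk_sz counted in FTL blocks, while the NV cache and base-device layout dumps report the same regions in MiB. A quick cross-check of that arithmetic, assuming the 4 KiB FTL block size these figures are consistent with (an illustrative check, not part of the test output):

\[ \mathtt{0x5000} = 20480 \ \text{blocks} \times 4\,\text{KiB} = 80.00\,\text{MiB} \]
\[ \mathtt{0x1900000} = 26214400 \ \text{blocks} \times 4\,\text{KiB} = 102400.00\,\text{MiB} \]

These line up with the 80.00 MiB l2p region and the 102400.00 MiB data_btm region reported in the layout dumps above.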
20:41:53.343695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:59.257 [2024-07-12 20:41:53.343713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.730 ms 00:24:59.257 [2024-07-12 20:41:53.343725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.257 [2024-07-12 20:41:53.346919] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:59.257 [2024-07-12 20:41:53.346968] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:59.257 [2024-07-12 20:41:53.346987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.257 [2024-07-12 20:41:53.347000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:59.257 [2024-07-12 20:41:53.347019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.112 ms 00:24:59.257 [2024-07-12 20:41:53.347031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.257 [2024-07-12 20:41:53.363156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.257 [2024-07-12 20:41:53.363235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:59.257 [2024-07-12 20:41:53.363266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.069 ms 00:24:59.257 [2024-07-12 20:41:53.363317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.257 [2024-07-12 20:41:53.366621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.257 [2024-07-12 20:41:53.366662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:59.257 [2024-07-12 20:41:53.366679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.197 ms 00:24:59.257 [2024-07-12 20:41:53.366690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.257 [2024-07-12 20:41:53.368386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.257 [2024-07-12 20:41:53.368423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:59.257 [2024-07-12 20:41:53.368439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:24:59.258 [2024-07-12 20:41:53.368449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.258 [2024-07-12 20:41:53.368976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.258 [2024-07-12 20:41:53.369004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:59.258 [2024-07-12 20:41:53.369028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:24:59.258 [2024-07-12 20:41:53.369051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.258 [2024-07-12 20:41:53.391371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.258 [2024-07-12 20:41:53.391444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:59.258 [2024-07-12 20:41:53.391465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.293 ms 00:24:59.258 [2024-07-12 20:41:53.391477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.258 [2024-07-12 20:41:53.400074] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:59.516 [2024-07-12 20:41:53.404465] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.516 [2024-07-12 20:41:53.404508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:59.516 [2024-07-12 20:41:53.404540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.914 ms 00:24:59.516 [2024-07-12 20:41:53.404552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.516 [2024-07-12 20:41:53.404681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.516 [2024-07-12 20:41:53.404701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:59.516 [2024-07-12 20:41:53.404723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:59.516 [2024-07-12 20:41:53.404734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.516 [2024-07-12 20:41:53.406920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.516 [2024-07-12 20:41:53.406953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:59.516 [2024-07-12 20:41:53.406972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.126 ms 00:24:59.516 [2024-07-12 20:41:53.406984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.516 [2024-07-12 20:41:53.407023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.516 [2024-07-12 20:41:53.407040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:59.516 [2024-07-12 20:41:53.407054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:59.516 [2024-07-12 20:41:53.407065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.516 [2024-07-12 20:41:53.407109] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:59.516 [2024-07-12 20:41:53.407131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.516 [2024-07-12 20:41:53.407142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:59.516 [2024-07-12 20:41:53.407155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:59.516 [2024-07-12 20:41:53.407166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.516 [2024-07-12 20:41:53.411516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.516 [2024-07-12 20:41:53.411568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:59.516 [2024-07-12 20:41:53.411586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.313 ms 00:24:59.516 [2024-07-12 20:41:53.411598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.516 [2024-07-12 20:41:53.411689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.516 [2024-07-12 20:41:53.411715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:59.516 [2024-07-12 20:41:53.411728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:59.516 [2024-07-12 20:41:53.411738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.516 [2024-07-12 20:41:53.419158] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 145.343 ms, result 0 00:25:38.081  Copying: 896/1048576 [kB] (896 kBps) Copying: 4588/1048576 [kB] (3692 kBps) Copying: 26/1024 [MB] (22 MBps) Copying: 
56/1024 [MB] (29 MBps) Copying: 85/1024 [MB] (29 MBps) Copying: 114/1024 [MB] (29 MBps) Copying: 144/1024 [MB] (29 MBps) Copying: 172/1024 [MB] (28 MBps) Copying: 201/1024 [MB] (29 MBps) Copying: 230/1024 [MB] (28 MBps) Copying: 259/1024 [MB] (28 MBps) Copying: 287/1024 [MB] (28 MBps) Copying: 317/1024 [MB] (29 MBps) Copying: 346/1024 [MB] (29 MBps) Copying: 375/1024 [MB] (29 MBps) Copying: 404/1024 [MB] (28 MBps) Copying: 433/1024 [MB] (29 MBps) Copying: 462/1024 [MB] (28 MBps) Copying: 492/1024 [MB] (29 MBps) Copying: 521/1024 [MB] (28 MBps) Copying: 550/1024 [MB] (28 MBps) Copying: 578/1024 [MB] (28 MBps) Copying: 607/1024 [MB] (28 MBps) Copying: 637/1024 [MB] (29 MBps) Copying: 665/1024 [MB] (28 MBps) Copying: 694/1024 [MB] (28 MBps) Copying: 722/1024 [MB] (28 MBps) Copying: 750/1024 [MB] (28 MBps) Copying: 779/1024 [MB] (28 MBps) Copying: 807/1024 [MB] (27 MBps) Copying: 835/1024 [MB] (28 MBps) Copying: 861/1024 [MB] (26 MBps) Copying: 889/1024 [MB] (27 MBps) Copying: 917/1024 [MB] (27 MBps) Copying: 945/1024 [MB] (27 MBps) Copying: 973/1024 [MB] (27 MBps) Copying: 1001/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-12 20:42:31.935105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.081 [2024-07-12 20:42:31.935311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:38.081 [2024-07-12 20:42:31.935339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:38.081 [2024-07-12 20:42:31.935363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.081 [2024-07-12 20:42:31.935421] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:38.081 [2024-07-12 20:42:31.936768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.081 [2024-07-12 20:42:31.936800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:38.081 [2024-07-12 20:42:31.936815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.321 ms 00:25:38.081 [2024-07-12 20:42:31.936828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.081 [2024-07-12 20:42:31.937110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.081 [2024-07-12 20:42:31.937136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:38.081 [2024-07-12 20:42:31.937151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:25:38.081 [2024-07-12 20:42:31.937163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.081 [2024-07-12 20:42:31.949336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.081 [2024-07-12 20:42:31.949381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:38.081 [2024-07-12 20:42:31.949400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.142 ms 00:25:38.081 [2024-07-12 20:42:31.949421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.081 [2024-07-12 20:42:31.956960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.081 [2024-07-12 20:42:31.957009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:38.081 [2024-07-12 20:42:31.957024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.497 ms 00:25:38.081 [2024-07-12 20:42:31.957035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.081 
[2024-07-12 20:42:31.958369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.081 [2024-07-12 20:42:31.958413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:38.081 [2024-07-12 20:42:31.958429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.278 ms 00:25:38.081 [2024-07-12 20:42:31.958440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.081 [2024-07-12 20:42:31.962648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.081 [2024-07-12 20:42:31.962687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:38.081 [2024-07-12 20:42:31.962702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.171 ms 00:25:38.081 [2024-07-12 20:42:31.962714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.081 [2024-07-12 20:42:31.966181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.081 [2024-07-12 20:42:31.966220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:38.081 [2024-07-12 20:42:31.966236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.425 ms 00:25:38.081 [2024-07-12 20:42:31.966263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.081 [2024-07-12 20:42:31.968436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.081 [2024-07-12 20:42:31.968475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:38.081 [2024-07-12 20:42:31.968504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.150 ms 00:25:38.081 [2024-07-12 20:42:31.968514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.081 [2024-07-12 20:42:31.970015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.081 [2024-07-12 20:42:31.970051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:38.081 [2024-07-12 20:42:31.970065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:25:38.081 [2024-07-12 20:42:31.970076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.081 [2024-07-12 20:42:31.971170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.081 [2024-07-12 20:42:31.971207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:38.081 [2024-07-12 20:42:31.971221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:25:38.081 [2024-07-12 20:42:31.971232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.081 [2024-07-12 20:42:31.972509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.081 [2024-07-12 20:42:31.972544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:38.081 [2024-07-12 20:42:31.972558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:25:38.081 [2024-07-12 20:42:31.972598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.081 [2024-07-12 20:42:31.972634] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:38.081 [2024-07-12 20:42:31.972664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:38.081 [2024-07-12 20:42:31.972678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 
wr_cnt: 1 state: open 00:25:38.081 [2024-07-12 20:42:31.972690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:38.081 [2024-07-12 20:42:31.972906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.972918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.972929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.972940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.972951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.972964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.972976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.972988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.972999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973290] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973612] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-07-12 20:42:31.973807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-07-12 20:42:31.973819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-07-12 20:42:31.973831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-07-12 20:42:31.973842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-07-12 20:42:31.973854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-07-12 20:42:31.973866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-07-12 20:42:31.973878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-07-12 20:42:31.973902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-07-12 20:42:31.973913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-07-12 20:42:31.973933] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:38.083 [2024-07-12 20:42:31.973944] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04eac149-743a-445f-850f-492299bcef1c 00:25:38.083 [2024-07-12 20:42:31.973956] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:25:38.083 [2024-07-12 20:42:31.973966] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136384 00:25:38.083 [2024-07-12 20:42:31.973977] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134400 00:25:38.083 [2024-07-12 20:42:31.973988] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:25:38.083 [2024-07-12 20:42:31.973999] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:38.083 [2024-07-12 20:42:31.974010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:38.083 [2024-07-12 20:42:31.974020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:38.083 [2024-07-12 20:42:31.974030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:38.083 [2024-07-12 20:42:31.974040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:38.083 [2024-07-12 20:42:31.974050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.083 [2024-07-12 20:42:31.974061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:38.083 [2024-07-12 20:42:31.974077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:25:38.083 [2024-07-12 20:42:31.974088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:31.976925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.083 [2024-07-12 20:42:31.976953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:38.083 [2024-07-12 20:42:31.976967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.816 ms 00:25:38.083 [2024-07-12 20:42:31.976978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:31.977169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.083 [2024-07-12 20:42:31.977185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:38.083 [2024-07-12 20:42:31.977207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:25:38.083 [2024-07-12 20:42:31.977218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:31.986684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-07-12 20:42:31.986726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:38.083 [2024-07-12 20:42:31.986741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-07-12 20:42:31.986767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:31.986840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-07-12 20:42:31.986856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:38.083 [2024-07-12 20:42:31.986868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-07-12 20:42:31.986878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:31.986957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-07-12 20:42:31.986985] mngt/ftl_mngt.c: 
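The statistics dump near the start of this shutdown sequence reports total writes, user writes, and a WAF figure; the three are mutually consistent under the usual definition of write amplification (total device writes divided by user writes). A quick check using only the numbers from this log:

\[ \text{WAF} = \frac{136384}{134400} \approx 1.0148 \]

which matches the reported "WAF: 1.0148".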
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:38.083 [2024-07-12 20:42:31.986998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-07-12 20:42:31.987009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:31.987031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-07-12 20:42:31.987051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:38.083 [2024-07-12 20:42:31.987062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-07-12 20:42:31.987073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:32.004021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-07-12 20:42:32.004091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:38.083 [2024-07-12 20:42:32.004110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-07-12 20:42:32.004122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:32.018009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-07-12 20:42:32.018073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:38.083 [2024-07-12 20:42:32.018090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-07-12 20:42:32.018102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:32.018197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-07-12 20:42:32.018214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:38.083 [2024-07-12 20:42:32.018227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-07-12 20:42:32.018238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:32.018308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-07-12 20:42:32.018348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:38.083 [2024-07-12 20:42:32.018366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-07-12 20:42:32.018380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:32.018496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-07-12 20:42:32.018523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:38.083 [2024-07-12 20:42:32.018535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-07-12 20:42:32.018546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:32.018607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-07-12 20:42:32.018624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:38.083 [2024-07-12 20:42:32.018637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-07-12 20:42:32.018655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:32.018708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:38.083 [2024-07-12 20:42:32.018732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:38.083 [2024-07-12 20:42:32.018747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-07-12 20:42:32.018758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:32.018817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-07-12 20:42:32.018845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:38.083 [2024-07-12 20:42:32.018869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-07-12 20:42:32.018880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-07-12 20:42:32.019057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 83.901 ms, result 0 00:25:38.342 00:25:38.342 00:25:38.342 20:42:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:40.874 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:40.874 20:42:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:40.874 [2024-07-12 20:42:34.529479] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:25:40.874 [2024-07-12 20:42:34.529691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96786 ] 00:25:40.874 [2024-07-12 20:42:34.683023] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
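The two dirty_shutdown.sh steps logged above (md5sum -c on the first half of the test data, then spdk_dd reading the second half back out of ftl0) show the readback-and-verify pattern this test applies after the dirty shutdown. Below is a minimal sketch of that pattern; the paths and spdk_dd flags are taken from the log, while the SPDK_REPO variable and the testfile2.md5 checksum file are illustrative assumptions, not part of the test output.

SPDK_REPO=/home/vagrant/spdk_repo/spdk          # repo root, as seen in the logged paths
FTL_JSON="$SPDK_REPO/test/ftl/config/ftl.json"  # bdev config used to bring up ftl0
CHUNK=262144                                    # blocks per half (1 GiB at the 4 KiB FTL block size)

# Read the second half of the previously written data back out of the FTL bdev.
"$SPDK_REPO/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK_REPO/test/ftl/testfile2" \
    --count=$CHUNK --skip=$CHUNK --json="$FTL_JSON"

# Compare against a checksum recorded before the dirty shutdown
# (testfile2.md5 is assumed here for illustration).
md5sum -c "$SPDK_REPO/test/ftl/testfile2.md5"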
00:25:40.874 [2024-07-12 20:42:34.706912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.874 [2024-07-12 20:42:34.812924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.874 [2024-07-12 20:42:34.969978] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:40.874 [2024-07-12 20:42:34.970076] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:41.134 [2024-07-12 20:42:35.136077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.134 [2024-07-12 20:42:35.136143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:41.134 [2024-07-12 20:42:35.136167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:41.134 [2024-07-12 20:42:35.136195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.134 [2024-07-12 20:42:35.136306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.134 [2024-07-12 20:42:35.136330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:41.134 [2024-07-12 20:42:35.136358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:41.134 [2024-07-12 20:42:35.136379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.134 [2024-07-12 20:42:35.136414] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:41.134 [2024-07-12 20:42:35.136770] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:41.134 [2024-07-12 20:42:35.136805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.134 [2024-07-12 20:42:35.136819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:41.134 [2024-07-12 20:42:35.136832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:25:41.134 [2024-07-12 20:42:35.136855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.134 [2024-07-12 20:42:35.139275] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:41.134 [2024-07-12 20:42:35.142966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.134 [2024-07-12 20:42:35.143014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:41.134 [2024-07-12 20:42:35.143040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.693 ms 00:25:41.134 [2024-07-12 20:42:35.143057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.134 [2024-07-12 20:42:35.143152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.134 [2024-07-12 20:42:35.143172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:41.134 [2024-07-12 20:42:35.143186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:41.134 [2024-07-12 20:42:35.143197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.134 [2024-07-12 20:42:35.155102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.134 [2024-07-12 20:42:35.155154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:41.134 [2024-07-12 20:42:35.155179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.831 ms 00:25:41.134 [2024-07-12 20:42:35.155204] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.134 [2024-07-12 20:42:35.155342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.134 [2024-07-12 20:42:35.155395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:41.134 [2024-07-12 20:42:35.155425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:25:41.134 [2024-07-12 20:42:35.155438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.134 [2024-07-12 20:42:35.155533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.134 [2024-07-12 20:42:35.155557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:41.134 [2024-07-12 20:42:35.155572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:41.134 [2024-07-12 20:42:35.155585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.134 [2024-07-12 20:42:35.155632] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:41.134 [2024-07-12 20:42:35.158318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.134 [2024-07-12 20:42:35.158379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:41.134 [2024-07-12 20:42:35.158396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.704 ms 00:25:41.134 [2024-07-12 20:42:35.158409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.134 [2024-07-12 20:42:35.158473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.134 [2024-07-12 20:42:35.158493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:41.134 [2024-07-12 20:42:35.158513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:41.134 [2024-07-12 20:42:35.158535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.134 [2024-07-12 20:42:35.158567] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:41.134 [2024-07-12 20:42:35.158600] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:41.134 [2024-07-12 20:42:35.158645] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:41.134 [2024-07-12 20:42:35.158670] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:41.134 [2024-07-12 20:42:35.158772] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:41.134 [2024-07-12 20:42:35.158789] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:41.134 [2024-07-12 20:42:35.158804] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:41.134 [2024-07-12 20:42:35.158819] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:41.134 [2024-07-12 20:42:35.158833] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:41.134 [2024-07-12 20:42:35.158858] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:41.134 [2024-07-12 20:42:35.158870] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:25:41.134 [2024-07-12 20:42:35.158891] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:41.134 [2024-07-12 20:42:35.158903] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:41.134 [2024-07-12 20:42:35.158920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.134 [2024-07-12 20:42:35.158943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:41.134 [2024-07-12 20:42:35.158957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:25:41.134 [2024-07-12 20:42:35.158968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.134 [2024-07-12 20:42:35.159076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.134 [2024-07-12 20:42:35.159092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:41.134 [2024-07-12 20:42:35.159105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:41.134 [2024-07-12 20:42:35.159116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.134 [2024-07-12 20:42:35.159232] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:41.134 [2024-07-12 20:42:35.159262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:41.134 [2024-07-12 20:42:35.159711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:41.134 [2024-07-12 20:42:35.159758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.134 [2024-07-12 20:42:35.159800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:41.134 [2024-07-12 20:42:35.159927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:41.134 [2024-07-12 20:42:35.159981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:41.134 [2024-07-12 20:42:35.160022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:41.134 [2024-07-12 20:42:35.160150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:41.134 [2024-07-12 20:42:35.160203] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:41.134 [2024-07-12 20:42:35.160260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:41.134 [2024-07-12 20:42:35.160395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:41.134 [2024-07-12 20:42:35.160452] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:41.134 [2024-07-12 20:42:35.160584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:41.134 [2024-07-12 20:42:35.160748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:41.135 [2024-07-12 20:42:35.160786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.135 [2024-07-12 20:42:35.160800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:41.135 [2024-07-12 20:42:35.160813] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:41.135 [2024-07-12 20:42:35.160825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.135 [2024-07-12 20:42:35.160837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:41.135 [2024-07-12 20:42:35.160857] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:41.135 [2024-07-12 20:42:35.160868] 
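The layout setup above also reports "L2P entries: 20971520" and "L2P address size: 4"; multiplying the two gives the size of the L2P table itself, which is consistent with the 80.00 MiB l2p region in the NV cache layout dump (a quick check, not part of the log):

\[ 20971520 \times 4\,\text{B} = 83886080\,\text{B} = 80.00\,\text{MiB} \]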
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:41.135 [2024-07-12 20:42:35.160880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:41.135 [2024-07-12 20:42:35.160892] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:41.135 [2024-07-12 20:42:35.160911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:41.135 [2024-07-12 20:42:35.160924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:41.135 [2024-07-12 20:42:35.160937] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:41.135 [2024-07-12 20:42:35.160948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:41.135 [2024-07-12 20:42:35.160959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:41.135 [2024-07-12 20:42:35.160971] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:41.135 [2024-07-12 20:42:35.160982] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:41.135 [2024-07-12 20:42:35.160993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:41.135 [2024-07-12 20:42:35.161005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:41.135 [2024-07-12 20:42:35.161017] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:41.135 [2024-07-12 20:42:35.161028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:41.135 [2024-07-12 20:42:35.161040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:41.135 [2024-07-12 20:42:35.161052] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:41.135 [2024-07-12 20:42:35.161063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:41.135 [2024-07-12 20:42:35.161076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:41.135 [2024-07-12 20:42:35.161088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.135 [2024-07-12 20:42:35.161103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:41.135 [2024-07-12 20:42:35.161116] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:41.135 [2024-07-12 20:42:35.161128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.135 [2024-07-12 20:42:35.161139] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:41.135 [2024-07-12 20:42:35.161151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:41.135 [2024-07-12 20:42:35.161163] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:41.135 [2024-07-12 20:42:35.161186] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.135 [2024-07-12 20:42:35.161200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:41.135 [2024-07-12 20:42:35.161212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:41.135 [2024-07-12 20:42:35.161223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:41.135 [2024-07-12 20:42:35.161236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:41.135 [2024-07-12 20:42:35.161270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:41.135 [2024-07-12 20:42:35.161283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:25:41.135 [2024-07-12 20:42:35.161298] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:41.135 [2024-07-12 20:42:35.161326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:41.135 [2024-07-12 20:42:35.161340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:41.135 [2024-07-12 20:42:35.161357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:41.135 [2024-07-12 20:42:35.161371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:41.135 [2024-07-12 20:42:35.161384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:41.135 [2024-07-12 20:42:35.161396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:41.135 [2024-07-12 20:42:35.161408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:41.135 [2024-07-12 20:42:35.161420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:41.135 [2024-07-12 20:42:35.161433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:41.135 [2024-07-12 20:42:35.161445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:41.135 [2024-07-12 20:42:35.161457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:41.135 [2024-07-12 20:42:35.161468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:41.135 [2024-07-12 20:42:35.161481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:41.135 [2024-07-12 20:42:35.161493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:41.135 [2024-07-12 20:42:35.161506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:41.135 [2024-07-12 20:42:35.161518] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:41.135 [2024-07-12 20:42:35.161535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:41.135 [2024-07-12 20:42:35.161548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:41.135 [2024-07-12 20:42:35.161565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:41.135 [2024-07-12 20:42:35.161579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:41.135 [2024-07-12 20:42:35.161592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:41.135 [2024-07-12 20:42:35.161606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.135 [2024-07-12 20:42:35.161632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:41.135 [2024-07-12 20:42:35.161647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.437 ms 00:25:41.135 [2024-07-12 20:42:35.161659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.135 [2024-07-12 20:42:35.193850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.135 [2024-07-12 20:42:35.194193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:41.135 [2024-07-12 20:42:35.194394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.074 ms 00:25:41.135 [2024-07-12 20:42:35.194591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.135 [2024-07-12 20:42:35.194836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.135 [2024-07-12 20:42:35.194909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:41.135 [2024-07-12 20:42:35.195063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:41.135 [2024-07-12 20:42:35.195200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.135 [2024-07-12 20:42:35.211852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.135 [2024-07-12 20:42:35.212080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:41.135 [2024-07-12 20:42:35.212202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.345 ms 00:25:41.135 [2024-07-12 20:42:35.212349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.135 [2024-07-12 20:42:35.212466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.135 [2024-07-12 20:42:35.212595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:41.135 [2024-07-12 20:42:35.212656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:41.135 [2024-07-12 20:42:35.212757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.135 [2024-07-12 20:42:35.213720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.135 [2024-07-12 20:42:35.213874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:41.135 [2024-07-12 20:42:35.213992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:25:41.135 [2024-07-12 20:42:35.214042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.135 [2024-07-12 20:42:35.214392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.135 [2024-07-12 20:42:35.214545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:41.135 [2024-07-12 20:42:35.214682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:25:41.135 [2024-07-12 20:42:35.214738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.135 [2024-07-12 20:42:35.224928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.135 [2024-07-12 
20:42:35.225116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:41.135 [2024-07-12 20:42:35.225145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.058 ms 00:25:41.135 [2024-07-12 20:42:35.225159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.135 [2024-07-12 20:42:35.229080] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:41.135 [2024-07-12 20:42:35.229125] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:41.135 [2024-07-12 20:42:35.229145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.135 [2024-07-12 20:42:35.229157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:41.135 [2024-07-12 20:42:35.229170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.800 ms 00:25:41.135 [2024-07-12 20:42:35.229181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.135 [2024-07-12 20:42:35.245152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.135 [2024-07-12 20:42:35.245193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:41.135 [2024-07-12 20:42:35.245227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.924 ms 00:25:41.135 [2024-07-12 20:42:35.245257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.135 [2024-07-12 20:42:35.247194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.135 [2024-07-12 20:42:35.247235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:41.135 [2024-07-12 20:42:35.247266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.864 ms 00:25:41.135 [2024-07-12 20:42:35.247278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.135 [2024-07-12 20:42:35.249034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.136 [2024-07-12 20:42:35.249074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:41.136 [2024-07-12 20:42:35.249101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:25:41.136 [2024-07-12 20:42:35.249115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.136 [2024-07-12 20:42:35.249520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.136 [2024-07-12 20:42:35.249551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:41.136 [2024-07-12 20:42:35.249572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:25:41.136 [2024-07-12 20:42:35.249585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.136 [2024-07-12 20:42:35.278341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.136 [2024-07-12 20:42:35.278428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:41.136 [2024-07-12 20:42:35.278470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.728 ms 00:25:41.136 [2024-07-12 20:42:35.278484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.395 [2024-07-12 20:42:35.286953] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:41.395 [2024-07-12 20:42:35.291937] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.395 [2024-07-12 20:42:35.292108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:41.395 [2024-07-12 20:42:35.292139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.361 ms 00:25:41.395 [2024-07-12 20:42:35.292154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.395 [2024-07-12 20:42:35.292305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.395 [2024-07-12 20:42:35.292329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:41.395 [2024-07-12 20:42:35.292352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:41.395 [2024-07-12 20:42:35.292365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.395 [2024-07-12 20:42:35.293748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.395 [2024-07-12 20:42:35.293785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:41.395 [2024-07-12 20:42:35.293801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.319 ms 00:25:41.395 [2024-07-12 20:42:35.293814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.395 [2024-07-12 20:42:35.293854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.395 [2024-07-12 20:42:35.293870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:41.395 [2024-07-12 20:42:35.293898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:41.395 [2024-07-12 20:42:35.293920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.395 [2024-07-12 20:42:35.293985] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:41.395 [2024-07-12 20:42:35.294021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.395 [2024-07-12 20:42:35.294035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:41.395 [2024-07-12 20:42:35.294048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:41.395 [2024-07-12 20:42:35.294060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.395 [2024-07-12 20:42:35.299062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.395 [2024-07-12 20:42:35.299107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:41.395 [2024-07-12 20:42:35.299126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.970 ms 00:25:41.395 [2024-07-12 20:42:35.299139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.395 [2024-07-12 20:42:35.299259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.395 [2024-07-12 20:42:35.299289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:41.395 [2024-07-12 20:42:35.299304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:41.395 [2024-07-12 20:42:35.299317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.395 [2024-07-12 20:42:35.300865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 164.224 ms, result 0 00:26:22.561  Copying: 25/1024 [MB] (25 MBps) Copying: 50/1024 [MB] (24 MBps) Copying: 74/1024 [MB] (24 MBps) Copying: 99/1024 [MB] (24 
MBps) Copying: 123/1024 [MB] (24 MBps) Copying: 148/1024 [MB] (24 MBps) Copying: 172/1024 [MB] (24 MBps) Copying: 197/1024 [MB] (24 MBps) Copying: 221/1024 [MB] (23 MBps) Copying: 246/1024 [MB] (25 MBps) Copying: 272/1024 [MB] (26 MBps) Copying: 299/1024 [MB] (26 MBps) Copying: 324/1024 [MB] (25 MBps) Copying: 349/1024 [MB] (25 MBps) Copying: 375/1024 [MB] (25 MBps) Copying: 399/1024 [MB] (24 MBps) Copying: 426/1024 [MB] (26 MBps) Copying: 452/1024 [MB] (26 MBps) Copying: 478/1024 [MB] (26 MBps) Copying: 502/1024 [MB] (23 MBps) Copying: 527/1024 [MB] (25 MBps) Copying: 553/1024 [MB] (25 MBps) Copying: 579/1024 [MB] (25 MBps) Copying: 604/1024 [MB] (25 MBps) Copying: 630/1024 [MB] (25 MBps) Copying: 655/1024 [MB] (24 MBps) Copying: 680/1024 [MB] (25 MBps) Copying: 706/1024 [MB] (26 MBps) Copying: 732/1024 [MB] (25 MBps) Copying: 758/1024 [MB] (25 MBps) Copying: 782/1024 [MB] (24 MBps) Copying: 807/1024 [MB] (24 MBps) Copying: 832/1024 [MB] (25 MBps) Copying: 856/1024 [MB] (23 MBps) Copying: 881/1024 [MB] (24 MBps) Copying: 906/1024 [MB] (25 MBps) Copying: 929/1024 [MB] (23 MBps) Copying: 953/1024 [MB] (23 MBps) Copying: 977/1024 [MB] (24 MBps) Copying: 1002/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-12 20:43:16.496114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.561 [2024-07-12 20:43:16.496214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:22.561 [2024-07-12 20:43:16.496270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:22.561 [2024-07-12 20:43:16.496299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.561 [2024-07-12 20:43:16.496336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:22.561 [2024-07-12 20:43:16.497508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.561 [2024-07-12 20:43:16.497535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:22.561 [2024-07-12 20:43:16.497551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:26:22.561 [2024-07-12 20:43:16.497563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.561 [2024-07-12 20:43:16.497846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.561 [2024-07-12 20:43:16.497879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:22.561 [2024-07-12 20:43:16.497894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:26:22.561 [2024-07-12 20:43:16.497907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.561 [2024-07-12 20:43:16.501859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.561 [2024-07-12 20:43:16.502018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:22.561 [2024-07-12 20:43:16.502143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.929 ms 00:26:22.561 [2024-07-12 20:43:16.502211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.561 [2024-07-12 20:43:16.509651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.561 [2024-07-12 20:43:16.509852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:22.561 [2024-07-12 20:43:16.510004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.060 ms 00:26:22.561 [2024-07-12 20:43:16.510059] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.561 [2024-07-12 20:43:16.511779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.561 [2024-07-12 20:43:16.511819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:22.561 [2024-07-12 20:43:16.511837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:26:22.561 [2024-07-12 20:43:16.511849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.561 [2024-07-12 20:43:16.516356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.561 [2024-07-12 20:43:16.516397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:22.561 [2024-07-12 20:43:16.516424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.463 ms 00:26:22.561 [2024-07-12 20:43:16.516436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.561 [2024-07-12 20:43:16.520047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.561 [2024-07-12 20:43:16.520112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:22.561 [2024-07-12 20:43:16.520131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.534 ms 00:26:22.561 [2024-07-12 20:43:16.520157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.561 [2024-07-12 20:43:16.521999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.562 [2024-07-12 20:43:16.522052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:22.562 [2024-07-12 20:43:16.522069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.817 ms 00:26:22.562 [2024-07-12 20:43:16.522080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.562 [2024-07-12 20:43:16.523613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.562 [2024-07-12 20:43:16.523651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:22.562 [2024-07-12 20:43:16.523667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.496 ms 00:26:22.562 [2024-07-12 20:43:16.523680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.562 [2024-07-12 20:43:16.524884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.562 [2024-07-12 20:43:16.524926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:22.562 [2024-07-12 20:43:16.524943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:26:22.562 [2024-07-12 20:43:16.524954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.562 [2024-07-12 20:43:16.526242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.562 [2024-07-12 20:43:16.526294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:22.562 [2024-07-12 20:43:16.526336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:26:22.562 [2024-07-12 20:43:16.526356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.562 [2024-07-12 20:43:16.526396] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:22.562 [2024-07-12 20:43:16.526420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:22.562 [2024-07-12 20:43:16.526437] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:26:22.562 [2024-07-12 20:43:16.526451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 
20:43:16.526792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.526992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:26:22.562 [2024-07-12 20:43:16.527122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.527991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.528051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.528123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.528301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.528512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.528584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.528747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.528813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.528931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.529050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.529180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.529267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.529407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.529471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.529649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.529779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.529937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.530067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.530138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.530281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:22.562 [2024-07-12 20:43:16.530349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:22.563 [2024-07-12 20:43:16.530701] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:26:22.563 [2024-07-12 20:43:16.530714] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 04eac149-743a-445f-850f-492299bcef1c 00:26:22.563 [2024-07-12 20:43:16.530728] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:26:22.563 [2024-07-12 20:43:16.530740] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:22.563 [2024-07-12 20:43:16.530751] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:22.563 [2024-07-12 20:43:16.530763] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:22.563 [2024-07-12 20:43:16.530785] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:22.563 [2024-07-12 20:43:16.530798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:22.563 [2024-07-12 20:43:16.530810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:22.563 [2024-07-12 20:43:16.530826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:22.563 [2024-07-12 20:43:16.530836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:22.563 [2024-07-12 20:43:16.530850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.563 [2024-07-12 20:43:16.530863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:22.563 [2024-07-12 20:43:16.530877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.457 ms 00:26:22.563 [2024-07-12 20:43:16.530889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.533769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.563 [2024-07-12 20:43:16.533907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:22.563 [2024-07-12 20:43:16.533987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.851 ms 00:26:22.563 [2024-07-12 20:43:16.534030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.534260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.563 [2024-07-12 20:43:16.534335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:22.563 [2024-07-12 20:43:16.534384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:26:22.563 [2024-07-12 20:43:16.534424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.543744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.563 [2024-07-12 20:43:16.543922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:22.563 [2024-07-12 20:43:16.544055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.563 [2024-07-12 20:43:16.544108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.544228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.563 [2024-07-12 20:43:16.544334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:22.563 [2024-07-12 20:43:16.544466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.563 [2024-07-12 20:43:16.544594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.544732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:22.563 [2024-07-12 20:43:16.544811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:22.563 [2024-07-12 20:43:16.544941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.563 [2024-07-12 20:43:16.545001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.545124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.563 [2024-07-12 20:43:16.545181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:22.563 [2024-07-12 20:43:16.545229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.563 [2024-07-12 20:43:16.545290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.564897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.563 [2024-07-12 20:43:16.565215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:22.563 [2024-07-12 20:43:16.565377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.563 [2024-07-12 20:43:16.565566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.579226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.563 [2024-07-12 20:43:16.579508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:22.563 [2024-07-12 20:43:16.579661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.563 [2024-07-12 20:43:16.579792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.579923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.563 [2024-07-12 20:43:16.580058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:22.563 [2024-07-12 20:43:16.580174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.563 [2024-07-12 20:43:16.580345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.580449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.563 [2024-07-12 20:43:16.580510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:22.563 [2024-07-12 20:43:16.580624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.563 [2024-07-12 20:43:16.580679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.580946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.563 [2024-07-12 20:43:16.580979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:22.563 [2024-07-12 20:43:16.580996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.563 [2024-07-12 20:43:16.581009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.581073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.563 [2024-07-12 20:43:16.581093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:22.563 [2024-07-12 20:43:16.581107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.563 [2024-07-12 20:43:16.581120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 
20:43:16.581185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.563 [2024-07-12 20:43:16.581204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:22.563 [2024-07-12 20:43:16.581219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.563 [2024-07-12 20:43:16.581231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.581333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.563 [2024-07-12 20:43:16.581354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:22.563 [2024-07-12 20:43:16.581378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.563 [2024-07-12 20:43:16.581391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.563 [2024-07-12 20:43:16.581613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.459 ms, result 0 00:26:22.821 00:26:22.821 00:26:22.821 20:43:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:25.350 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:25.350 20:43:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:25.350 20:43:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:25.350 20:43:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:25.350 20:43:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:25.350 20:43:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:25.350 20:43:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:25.350 20:43:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:25.350 Process with pid 94962 is not found 00:26:25.350 20:43:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 94962 00:26:25.350 20:43:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 94962 ']' 00:26:25.350 20:43:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 94962 00:26:25.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (94962) - No such process 00:26:25.350 20:43:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 94962 is not found' 00:26:25.350 20:43:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:25.918 Remove shared memory files 00:26:25.918 20:43:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:25.918 20:43:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:25.918 20:43:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:25.918 20:43:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:25.918 20:43:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:25.918 20:43:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:25.918 20:43:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:25.918 ************************************ 00:26:25.918 END TEST 
ftl_dirty_shutdown 00:26:25.918 ************************************ 00:26:25.918 00:26:25.918 real 3m40.425s 00:26:25.918 user 4m9.837s 00:26:25.918 sys 0m37.460s 00:26:25.918 20:43:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:25.918 20:43:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:25.918 20:43:19 ftl -- common/autotest_common.sh@1142 -- # return 0 00:26:25.918 20:43:19 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:25.918 20:43:19 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:26:25.918 20:43:19 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.918 20:43:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:25.918 ************************************ 00:26:25.918 START TEST ftl_upgrade_shutdown 00:26:25.918 ************************************ 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:25.918 * Looking for test storage... 00:26:25.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
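The upgrade test starts by sourcing test/ftl/common.sh, and the traced lines that follow show it resolving its directories from the script location and pointing rpc_py at scripts/rpc.py. A condensed sketch of that setup, using the paths reported in the trace (a sketch of the pattern, not the full helper):

    # resolve the test directory and repository root the way common.sh does
    testdir=$(readlink -f "$(dirname "$0")")    # .../spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")     # .../spdk
    rpc_py=$rootdir/scripts/rpc.py              # RPC client used for the bdev setup below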
00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:25.918 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:25.919 
20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=97296 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 97296 00:26:25.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 97296 ']' 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:25.919 20:43:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:26.177 [2024-07-12 20:43:20.107918] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:26.177 [2024-07-12 20:43:20.108124] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97296 ] 00:26:26.178 [2024-07-12 20:43:20.276685] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
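tcp_target_setup above launches a dedicated SPDK target (spdk_tgt, pid 97296 on this run) pinned to core 0 and blocks until its RPC socket answers; the EAL and reactor messages around this point are that target coming up. A minimal sketch of the launch pattern, where waitforlisten is the autotest_common.sh helper seen in the trace and the backgrounding detail is an assumption of how the helper is typically driven:

    # start the SPDK target pinned to core 0, as ftl/common.sh does above
    "$rootdir/build/bin/spdk_tgt" --cpumask='[0]' &
    spdk_tgt_pid=$!
    # wait until the target listens on its default RPC socket (/var/tmp/spdk.sock)
    waitforlisten "$spdk_tgt_pid"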
00:26:26.178 [2024-07-12 20:43:20.294244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.435 [2024-07-12 20:43:20.425250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:27.001 20:43:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:27.259 20:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:27.259 20:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:27.259 20:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:27.259 20:43:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:26:27.259 20:43:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:27.259 20:43:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:27.259 20:43:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:27.259 20:43:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:27.516 20:43:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:27.516 { 00:26:27.516 "name": 
"basen1", 00:26:27.516 "aliases": [ 00:26:27.516 "9a685bbe-af2b-4ad4-9f7d-23bafcce84ef" 00:26:27.516 ], 00:26:27.516 "product_name": "NVMe disk", 00:26:27.516 "block_size": 4096, 00:26:27.516 "num_blocks": 1310720, 00:26:27.516 "uuid": "9a685bbe-af2b-4ad4-9f7d-23bafcce84ef", 00:26:27.516 "assigned_rate_limits": { 00:26:27.516 "rw_ios_per_sec": 0, 00:26:27.516 "rw_mbytes_per_sec": 0, 00:26:27.516 "r_mbytes_per_sec": 0, 00:26:27.516 "w_mbytes_per_sec": 0 00:26:27.517 }, 00:26:27.517 "claimed": true, 00:26:27.517 "claim_type": "read_many_write_one", 00:26:27.517 "zoned": false, 00:26:27.517 "supported_io_types": { 00:26:27.517 "read": true, 00:26:27.517 "write": true, 00:26:27.517 "unmap": true, 00:26:27.517 "flush": true, 00:26:27.517 "reset": true, 00:26:27.517 "nvme_admin": true, 00:26:27.517 "nvme_io": true, 00:26:27.517 "nvme_io_md": false, 00:26:27.517 "write_zeroes": true, 00:26:27.517 "zcopy": false, 00:26:27.517 "get_zone_info": false, 00:26:27.517 "zone_management": false, 00:26:27.517 "zone_append": false, 00:26:27.517 "compare": true, 00:26:27.517 "compare_and_write": false, 00:26:27.517 "abort": true, 00:26:27.517 "seek_hole": false, 00:26:27.517 "seek_data": false, 00:26:27.517 "copy": true, 00:26:27.517 "nvme_iov_md": false 00:26:27.517 }, 00:26:27.517 "driver_specific": { 00:26:27.517 "nvme": [ 00:26:27.517 { 00:26:27.517 "pci_address": "0000:00:11.0", 00:26:27.517 "trid": { 00:26:27.517 "trtype": "PCIe", 00:26:27.517 "traddr": "0000:00:11.0" 00:26:27.517 }, 00:26:27.517 "ctrlr_data": { 00:26:27.517 "cntlid": 0, 00:26:27.517 "vendor_id": "0x1b36", 00:26:27.517 "model_number": "QEMU NVMe Ctrl", 00:26:27.517 "serial_number": "12341", 00:26:27.517 "firmware_revision": "8.0.0", 00:26:27.517 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:27.517 "oacs": { 00:26:27.517 "security": 0, 00:26:27.517 "format": 1, 00:26:27.517 "firmware": 0, 00:26:27.517 "ns_manage": 1 00:26:27.517 }, 00:26:27.517 "multi_ctrlr": false, 00:26:27.517 "ana_reporting": false 00:26:27.517 }, 00:26:27.517 "vs": { 00:26:27.517 "nvme_version": "1.4" 00:26:27.517 }, 00:26:27.517 "ns_data": { 00:26:27.517 "id": 1, 00:26:27.517 "can_share": false 00:26:27.517 } 00:26:27.517 } 00:26:27.517 ], 00:26:27.517 "mp_policy": "active_passive" 00:26:27.517 } 00:26:27.517 } 00:26:27.517 ]' 00:26:27.517 20:43:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:27.775 20:43:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:27.775 20:43:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:27.775 20:43:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:27.775 20:43:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:27.775 20:43:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:27.775 20:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:27.775 20:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:27.775 20:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:27.775 20:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:27.775 20:43:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:28.033 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=3bce1f5a-7b7e-4e56-85c2-b2592a6eb12f 00:26:28.033 20:43:22 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@29 -- # for lvs in $stores 00:26:28.033 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3bce1f5a-7b7e-4e56-85c2-b2592a6eb12f 00:26:28.291 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:28.549 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=420a62d6-eaed-4f10-87fe-26ab990ebb11 00:26:28.549 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 420a62d6-eaed-4f10-87fe-26ab990ebb11 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=616cbef6-6ff3-4e77-9dec-62819976fda8 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 616cbef6-6ff3-4e77-9dec-62819976fda8 ]] 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 616cbef6-6ff3-4e77-9dec-62819976fda8 5120 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=616cbef6-6ff3-4e77-9dec-62819976fda8 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 616cbef6-6ff3-4e77-9dec-62819976fda8 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=616cbef6-6ff3-4e77-9dec-62819976fda8 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:28.807 20:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 616cbef6-6ff3-4e77-9dec-62819976fda8 00:26:29.064 20:43:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:29.064 { 00:26:29.064 "name": "616cbef6-6ff3-4e77-9dec-62819976fda8", 00:26:29.064 "aliases": [ 00:26:29.064 "lvs/basen1p0" 00:26:29.064 ], 00:26:29.064 "product_name": "Logical Volume", 00:26:29.064 "block_size": 4096, 00:26:29.064 "num_blocks": 5242880, 00:26:29.064 "uuid": "616cbef6-6ff3-4e77-9dec-62819976fda8", 00:26:29.064 "assigned_rate_limits": { 00:26:29.064 "rw_ios_per_sec": 0, 00:26:29.064 "rw_mbytes_per_sec": 0, 00:26:29.064 "r_mbytes_per_sec": 0, 00:26:29.064 "w_mbytes_per_sec": 0 00:26:29.064 }, 00:26:29.064 "claimed": false, 00:26:29.064 "zoned": false, 00:26:29.064 "supported_io_types": { 00:26:29.064 "read": true, 00:26:29.064 "write": true, 00:26:29.064 "unmap": true, 00:26:29.064 "flush": false, 00:26:29.064 "reset": true, 00:26:29.064 "nvme_admin": false, 00:26:29.064 "nvme_io": false, 00:26:29.064 "nvme_io_md": false, 00:26:29.064 "write_zeroes": true, 00:26:29.064 "zcopy": false, 00:26:29.064 "get_zone_info": false, 00:26:29.064 "zone_management": false, 00:26:29.065 "zone_append": false, 00:26:29.065 "compare": false, 00:26:29.065 "compare_and_write": false, 00:26:29.065 "abort": false, 00:26:29.065 "seek_hole": true, 00:26:29.065 "seek_data": true, 00:26:29.065 "copy": false, 
00:26:29.065 "nvme_iov_md": false 00:26:29.065 }, 00:26:29.065 "driver_specific": { 00:26:29.065 "lvol": { 00:26:29.065 "lvol_store_uuid": "420a62d6-eaed-4f10-87fe-26ab990ebb11", 00:26:29.065 "base_bdev": "basen1", 00:26:29.065 "thin_provision": true, 00:26:29.065 "num_allocated_clusters": 0, 00:26:29.065 "snapshot": false, 00:26:29.065 "clone": false, 00:26:29.065 "esnap_clone": false 00:26:29.065 } 00:26:29.065 } 00:26:29.065 } 00:26:29.065 ]' 00:26:29.065 20:43:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:29.065 20:43:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:29.065 20:43:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:29.065 20:43:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:26:29.065 20:43:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:26:29.065 20:43:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:26:29.065 20:43:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:29.065 20:43:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:29.065 20:43:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:29.322 20:43:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:29.322 20:43:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:29.322 20:43:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:29.580 20:43:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:29.580 20:43:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:29.580 20:43:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 616cbef6-6ff3-4e77-9dec-62819976fda8 -c cachen1p0 --l2p_dram_limit 2 00:26:29.838 [2024-07-12 20:43:23.965097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.839 [2024-07-12 20:43:23.965211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:29.839 [2024-07-12 20:43:23.965234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:29.839 [2024-07-12 20:43:23.965271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.839 [2024-07-12 20:43:23.965379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.839 [2024-07-12 20:43:23.965404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:29.839 [2024-07-12 20:43:23.965419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.079 ms 00:26:29.839 [2024-07-12 20:43:23.965437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.839 [2024-07-12 20:43:23.965466] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:29.839 [2024-07-12 20:43:23.965857] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:29.839 [2024-07-12 20:43:23.965891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.839 [2024-07-12 20:43:23.965908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 
00:26:29.839 [2024-07-12 20:43:23.965922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.432 ms 00:26:29.839 [2024-07-12 20:43:23.965936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.839 [2024-07-12 20:43:23.966051] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 81f6ab7e-04f4-4020-ac3e-dfc4e2c58e3c 00:26:29.839 [2024-07-12 20:43:23.968502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.839 [2024-07-12 20:43:23.968545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:29.839 [2024-07-12 20:43:23.968566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:29.839 [2024-07-12 20:43:23.968580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.839 [2024-07-12 20:43:23.981714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.839 [2024-07-12 20:43:23.981779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:29.839 [2024-07-12 20:43:23.981809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.065 ms 00:26:29.839 [2024-07-12 20:43:23.981822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.839 [2024-07-12 20:43:23.981920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.839 [2024-07-12 20:43:23.981946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:29.839 [2024-07-12 20:43:23.981965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:29.839 [2024-07-12 20:43:23.981977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.839 [2024-07-12 20:43:23.982104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.839 [2024-07-12 20:43:23.982123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:29.839 [2024-07-12 20:43:23.982139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:29.839 [2024-07-12 20:43:23.982151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.839 [2024-07-12 20:43:23.982193] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:29.839 [2024-07-12 20:43:23.985036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.839 [2024-07-12 20:43:23.985078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:29.839 [2024-07-12 20:43:23.985096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.858 ms 00:26:29.839 [2024-07-12 20:43:23.985113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.839 [2024-07-12 20:43:23.985156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.839 [2024-07-12 20:43:23.985175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:29.839 [2024-07-12 20:43:23.985188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:29.839 [2024-07-12 20:43:23.985206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.839 [2024-07-12 20:43:23.985235] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:29.839 [2024-07-12 20:43:23.985430] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:29.839 [2024-07-12 20:43:23.985450] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:29.839 [2024-07-12 20:43:23.985470] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:26:29.839 [2024-07-12 20:43:23.985487] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:29.839 [2024-07-12 20:43:23.985503] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:29.839 [2024-07-12 20:43:23.985537] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:29.839 [2024-07-12 20:43:23.985551] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:29.839 [2024-07-12 20:43:23.985562] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:29.839 [2024-07-12 20:43:23.985594] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:29.839 [2024-07-12 20:43:23.985607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.839 [2024-07-12 20:43:23.985621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:29.839 [2024-07-12 20:43:23.985634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.374 ms 00:26:29.839 [2024-07-12 20:43:23.985650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.839 [2024-07-12 20:43:23.985743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.839 [2024-07-12 20:43:23.985764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:29.839 [2024-07-12 20:43:23.985776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:26:29.839 [2024-07-12 20:43:23.985796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.839 [2024-07-12 20:43:23.985911] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:29.839 [2024-07-12 20:43:23.985946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:29.839 [2024-07-12 20:43:23.985959] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:29.839 [2024-07-12 20:43:23.985976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.839 [2024-07-12 20:43:23.985992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:29.839 [2024-07-12 20:43:23.986006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:29.839 [2024-07-12 20:43:23.986018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:29.839 [2024-07-12 20:43:23.986033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:29.839 [2024-07-12 20:43:23.986045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:29.839 [2024-07-12 20:43:23.986059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.839 [2024-07-12 20:43:23.986070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:29.839 [2024-07-12 20:43:23.986084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:29.839 [2024-07-12 20:43:23.986094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.839 [2024-07-12 20:43:23.986110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:29.839 [2024-07-12 20:43:23.986121] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] 
offset: 47.38 MiB 00:26:29.839 [2024-07-12 20:43:23.986135] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.839 [2024-07-12 20:43:23.986145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:29.839 [2024-07-12 20:43:23.986159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:29.839 [2024-07-12 20:43:23.986169] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.839 [2024-07-12 20:43:23.986183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:29.839 [2024-07-12 20:43:23.986194] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:29.839 [2024-07-12 20:43:23.986208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:29.839 [2024-07-12 20:43:23.986219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:29.839 [2024-07-12 20:43:23.986233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:29.839 [2024-07-12 20:43:23.986244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:29.839 [2024-07-12 20:43:23.986258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:29.839 [2024-07-12 20:43:23.986283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:29.839 [2024-07-12 20:43:23.986302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:29.839 [2024-07-12 20:43:23.986314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:29.839 [2024-07-12 20:43:23.986330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:29.839 [2024-07-12 20:43:23.986341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:29.839 [2024-07-12 20:43:23.986354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:29.839 [2024-07-12 20:43:23.986365] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:29.839 [2024-07-12 20:43:23.986381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.839 [2024-07-12 20:43:23.986393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:29.839 [2024-07-12 20:43:23.986407] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:29.839 [2024-07-12 20:43:23.986418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.839 [2024-07-12 20:43:23.986431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:29.839 [2024-07-12 20:43:23.986442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:29.839 [2024-07-12 20:43:23.986471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.839 [2024-07-12 20:43:23.986498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:29.839 [2024-07-12 20:43:23.986512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:29.839 [2024-07-12 20:43:23.986523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.839 [2024-07-12 20:43:23.986536] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:29.839 [2024-07-12 20:43:23.986558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:29.839 [2024-07-12 20:43:23.986576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:29.839 [2024-07-12 20:43:23.986588] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl] blocks: 0.12 MiB 00:26:29.839 [2024-07-12 20:43:23.986606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:29.839 [2024-07-12 20:43:23.986617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:29.839 [2024-07-12 20:43:23.986631] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:30.099 [2024-07-12 20:43:23.986642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:30.099 [2024-07-12 20:43:23.986656] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:30.099 [2024-07-12 20:43:23.986667] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:30.099 [2024-07-12 20:43:23.986686] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:30.099 [2024-07-12 20:43:23.986710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.099 [2024-07-12 20:43:23.986736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:30.099 [2024-07-12 20:43:23.986749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:30.099 [2024-07-12 20:43:23.986764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:30.099 [2024-07-12 20:43:23.986778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:30.099 [2024-07-12 20:43:23.986793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:30.099 [2024-07-12 20:43:23.986806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:30.099 [2024-07-12 20:43:23.986823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:30.099 [2024-07-12 20:43:23.986835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:30.099 [2024-07-12 20:43:23.986850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:30.099 [2024-07-12 20:43:23.986861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:30.099 [2024-07-12 20:43:23.986876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:30.099 [2024-07-12 20:43:23.986887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:30.099 [2024-07-12 20:43:23.986902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:30.099 [2024-07-12 20:43:23.986914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:30.099 [2024-07-12 20:43:23.986928] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl] SB metadata layout - base dev: 00:26:30.099 [2024-07-12 20:43:23.986942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.099 [2024-07-12 20:43:23.986957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:30.099 [2024-07-12 20:43:23.986970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:30.099 [2024-07-12 20:43:23.986984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:30.099 [2024-07-12 20:43:23.986996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:30.099 [2024-07-12 20:43:23.987012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.099 [2024-07-12 20:43:23.987024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:30.099 [2024-07-12 20:43:23.987044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.162 ms 00:26:30.099 [2024-07-12 20:43:23.987057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.099 [2024-07-12 20:43:23.987151] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:30.099 [2024-07-12 20:43:23.987179] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:33.421 [2024-07-12 20:43:26.893920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.421 [2024-07-12 20:43:26.894270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:33.421 [2024-07-12 20:43:26.894310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2906.753 ms 00:26:33.421 [2024-07-12 20:43:26.894336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.421 [2024-07-12 20:43:26.913006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.421 [2024-07-12 20:43:26.913064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:33.421 [2024-07-12 20:43:26.913100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.542 ms 00:26:33.421 [2024-07-12 20:43:26.913116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.421 [2024-07-12 20:43:26.913227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.421 [2024-07-12 20:43:26.913265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:33.421 [2024-07-12 20:43:26.913283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:33.421 [2024-07-12 20:43:26.913295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.421 [2024-07-12 20:43:26.930431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.421 [2024-07-12 20:43:26.930532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:33.421 [2024-07-12 20:43:26.930554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.070 ms 00:26:33.421 [2024-07-12 20:43:26.930570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.421 [2024-07-12 20:43:26.930628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:26:33.421 [2024-07-12 20:43:26.930643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:33.421 [2024-07-12 20:43:26.930659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:33.421 [2024-07-12 20:43:26.930670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.421 [2024-07-12 20:43:26.931564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.421 [2024-07-12 20:43:26.931590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:33.421 [2024-07-12 20:43:26.931635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.783 ms 00:26:33.421 [2024-07-12 20:43:26.931648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.421 [2024-07-12 20:43:26.931732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.421 [2024-07-12 20:43:26.931763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:33.421 [2024-07-12 20:43:26.931797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:26:33.421 [2024-07-12 20:43:26.931809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.421 [2024-07-12 20:43:26.945629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.421 [2024-07-12 20:43:26.945680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:33.421 [2024-07-12 20:43:26.945701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.786 ms 00:26:33.421 [2024-07-12 20:43:26.945714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:26.957168] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:33.422 [2024-07-12 20:43:26.959143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.422 [2024-07-12 20:43:26.959181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:33.422 [2024-07-12 20:43:26.959199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.310 ms 00:26:33.422 [2024-07-12 20:43:26.959216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:26.981931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.422 [2024-07-12 20:43:26.981994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:33.422 [2024-07-12 20:43:26.982013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.667 ms 00:26:33.422 [2024-07-12 20:43:26.982031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:26.982165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.422 [2024-07-12 20:43:26.982189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:33.422 [2024-07-12 20:43:26.982203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.089 ms 00:26:33.422 [2024-07-12 20:43:26.982217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:26.985470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.422 [2024-07-12 20:43:26.985515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:33.422 [2024-07-12 20:43:26.985535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.192 ms 00:26:33.422 [2024-07-12 
20:43:26.985550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:26.988479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.422 [2024-07-12 20:43:26.988522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:33.422 [2024-07-12 20:43:26.988538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.886 ms 00:26:33.422 [2024-07-12 20:43:26.988552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:26.988997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.422 [2024-07-12 20:43:26.989073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:33.422 [2024-07-12 20:43:26.989089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.405 ms 00:26:33.422 [2024-07-12 20:43:26.989107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:27.031118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.422 [2024-07-12 20:43:27.031206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:33.422 [2024-07-12 20:43:27.031233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.972 ms 00:26:33.422 [2024-07-12 20:43:27.031269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:27.036882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.422 [2024-07-12 20:43:27.036929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:33.422 [2024-07-12 20:43:27.036947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.528 ms 00:26:33.422 [2024-07-12 20:43:27.036962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:27.040663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.422 [2024-07-12 20:43:27.040709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:33.422 [2024-07-12 20:43:27.040726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.641 ms 00:26:33.422 [2024-07-12 20:43:27.040740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:27.044645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.422 [2024-07-12 20:43:27.044692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:33.422 [2024-07-12 20:43:27.044709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.846 ms 00:26:33.422 [2024-07-12 20:43:27.044727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:27.044779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.422 [2024-07-12 20:43:27.044802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:33.422 [2024-07-12 20:43:27.044816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:33.422 [2024-07-12 20:43:27.044830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:27.044939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:33.422 [2024-07-12 20:43:27.044961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:33.422 [2024-07-12 20:43:27.044974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 
00:26:33.422 [2024-07-12 20:43:27.045008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:33.422 [2024-07-12 20:43:27.046631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3080.962 ms, result 0 00:26:33.422 { 00:26:33.422 "name": "ftl", 00:26:33.422 "uuid": "81f6ab7e-04f4-4020-ac3e-dfc4e2c58e3c" 00:26:33.422 } 00:26:33.422 20:43:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:33.422 [2024-07-12 20:43:27.356588] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.422 20:43:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:33.680 20:43:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:33.938 [2024-07-12 20:43:27.942034] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:33.938 20:43:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:34.195 [2024-07-12 20:43:28.170493] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:34.195 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:34.454 Fill FTL, iteration 1 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=97422 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 97422 /var/tmp/spdk.tgt.sock 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 97422 ']' 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:34.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:34.454 20:43:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:34.713 [2024-07-12 20:43:28.665749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:34.713 [2024-07-12 20:43:28.666195] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97422 ] 00:26:34.713 [2024-07-12 20:43:28.822189] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:34.713 [2024-07-12 20:43:28.846488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.971 [2024-07-12 20:43:28.946362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.538 20:43:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:35.538 20:43:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:26:35.538 20:43:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:35.796 ftln1 00:26:35.797 20:43:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:35.797 20:43:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:36.055 20:43:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:36.055 20:43:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 97422 00:26:36.055 20:43:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 97422 ']' 00:26:36.055 20:43:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 97422 00:26:36.055 20:43:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:26:36.055 20:43:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:36.055 20:43:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97422 00:26:36.055 killing process with pid 97422 00:26:36.055 20:43:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:36.055 20:43:30 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:36.055 20:43:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97422' 00:26:36.055 20:43:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 97422 00:26:36.055 20:43:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 97422 00:26:36.622 20:43:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:36.622 20:43:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:36.622 [2024-07-12 20:43:30.710998] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:36.622 [2024-07-12 20:43:30.711286] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97456 ] 00:26:36.879 [2024-07-12 20:43:30.868727] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:36.879 [2024-07-12 20:43:30.889513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.879 [2024-07-12 20:43:30.982287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.291  Copying: 207/1024 [MB] (207 MBps) Copying: 424/1024 [MB] (217 MBps) Copying: 636/1024 [MB] (212 MBps) Copying: 849/1024 [MB] (213 MBps) Copying: 1024/1024 [MB] (average 212 MBps) 00:26:42.291 00:26:42.291 Calculate MD5 checksum, iteration 1 00:26:42.291 20:43:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:42.291 20:43:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:42.291 20:43:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:42.291 20:43:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:42.291 20:43:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:42.291 20:43:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:42.291 20:43:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:42.291 20:43:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:42.291 [2024-07-12 20:43:36.401417] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:26:42.291 [2024-07-12 20:43:36.401603] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97514 ] 00:26:42.548 [2024-07-12 20:43:36.547659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:42.548 [2024-07-12 20:43:36.573091] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.548 [2024-07-12 20:43:36.695059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.474  Copying: 509/1024 [MB] (509 MBps) Copying: 993/1024 [MB] (484 MBps) Copying: 1024/1024 [MB] (average 495 MBps) 00:26:45.474 00:26:45.474 20:43:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:45.474 20:43:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:48.005 20:43:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:48.005 Fill FTL, iteration 2 00:26:48.006 20:43:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=089669d1429e70e271533f328b673fcf 00:26:48.006 20:43:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:48.006 20:43:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:48.006 20:43:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:48.006 20:43:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:48.006 20:43:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:48.006 20:43:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:48.006 20:43:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:48.006 20:43:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:48.006 20:43:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:48.006 [2024-07-12 20:43:41.678076] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:48.006 [2024-07-12 20:43:41.678252] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97570 ] 00:26:48.006 [2024-07-12 20:43:41.825523] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:48.006 [2024-07-12 20:43:41.848585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.006 [2024-07-12 20:43:41.943391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.132  Copying: 219/1024 [MB] (219 MBps) Copying: 428/1024 [MB] (209 MBps) Copying: 642/1024 [MB] (214 MBps) Copying: 853/1024 [MB] (211 MBps) Copying: 1024/1024 [MB] (average 212 MBps) 00:26:53.132 00:26:53.389 Calculate MD5 checksum, iteration 2 00:26:53.389 20:43:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:53.389 20:43:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:53.389 20:43:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:53.389 20:43:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:53.389 20:43:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:53.389 20:43:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:53.390 20:43:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:53.390 20:43:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:53.390 [2024-07-12 20:43:47.384619] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:53.390 [2024-07-12 20:43:47.385569] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97632 ] 00:26:53.648 [2024-07-12 20:43:47.538225] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:53.648 [2024-07-12 20:43:47.558370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.648 [2024-07-12 20:43:47.647416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.822  Copying: 474/1024 [MB] (474 MBps) Copying: 949/1024 [MB] (475 MBps) Copying: 1024/1024 [MB] (average 474 MBps) 00:26:56.822 00:26:56.822 20:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:56.822 20:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:59.356 20:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:59.356 20:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=db7f06b652f231a6851154a3bc52cbbc 00:26:59.356 20:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:59.356 20:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:59.356 20:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:59.356 [2024-07-12 20:43:53.376859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.356 [2024-07-12 20:43:53.376979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:59.356 [2024-07-12 20:43:53.377013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:59.356 [2024-07-12 20:43:53.377024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.356 [2024-07-12 20:43:53.377067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.356 [2024-07-12 20:43:53.377083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:59.356 [2024-07-12 20:43:53.377096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:59.356 [2024-07-12 20:43:53.377107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.356 [2024-07-12 20:43:53.377166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.356 [2024-07-12 20:43:53.377181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:59.356 [2024-07-12 20:43:53.377202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:59.356 [2024-07-12 20:43:53.377214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.356 [2024-07-12 20:43:53.377358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.443 ms, result 0 00:26:59.356 true 00:26:59.356 20:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:59.614 { 00:26:59.614 "name": "ftl", 00:26:59.614 "properties": [ 00:26:59.614 { 00:26:59.614 "name": "superblock_version", 00:26:59.614 "value": 5, 00:26:59.614 "read-only": true 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "name": "base_device", 00:26:59.614 "bands": [ 00:26:59.614 { 00:26:59.614 "id": 0, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 1, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 2, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 3, 00:26:59.614 "state": "FREE", 00:26:59.614 
"validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 4, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 5, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 6, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 7, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 8, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 9, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 10, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 11, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 12, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 13, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 14, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 15, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 16, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 17, 00:26:59.614 "state": "FREE", 00:26:59.614 "validity": 0.0 00:26:59.614 } 00:26:59.614 ], 00:26:59.614 "read-only": true 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "name": "cache_device", 00:26:59.614 "type": "bdev", 00:26:59.614 "chunks": [ 00:26:59.614 { 00:26:59.614 "id": 0, 00:26:59.614 "state": "INACTIVE", 00:26:59.614 "utilization": 0.0 00:26:59.614 }, 00:26:59.614 { 00:26:59.614 "id": 1, 00:26:59.614 "state": "CLOSED", 00:26:59.615 "utilization": 1.0 00:26:59.615 }, 00:26:59.615 { 00:26:59.615 "id": 2, 00:26:59.615 "state": "CLOSED", 00:26:59.615 "utilization": 1.0 00:26:59.615 }, 00:26:59.615 { 00:26:59.615 "id": 3, 00:26:59.615 "state": "OPEN", 00:26:59.615 "utilization": 0.001953125 00:26:59.615 }, 00:26:59.615 { 00:26:59.615 "id": 4, 00:26:59.615 "state": "OPEN", 00:26:59.615 "utilization": 0.0 00:26:59.615 } 00:26:59.615 ], 00:26:59.615 "read-only": true 00:26:59.615 }, 00:26:59.615 { 00:26:59.615 "name": "verbose_mode", 00:26:59.615 "value": true, 00:26:59.615 "unit": "", 00:26:59.615 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:59.615 }, 00:26:59.615 { 00:26:59.615 "name": "prep_upgrade_on_shutdown", 00:26:59.615 "value": false, 00:26:59.615 "unit": "", 00:26:59.615 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:59.615 } 00:26:59.615 ] 00:26:59.615 } 00:26:59.615 20:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:59.873 [2024-07-12 20:43:53.929554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.873 [2024-07-12 20:43:53.929659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:59.873 [2024-07-12 20:43:53.929680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:59.873 [2024-07-12 20:43:53.929698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:26:59.873 [2024-07-12 20:43:53.929751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.873 [2024-07-12 20:43:53.929764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:59.873 [2024-07-12 20:43:53.929782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:59.873 [2024-07-12 20:43:53.929793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.873 [2024-07-12 20:43:53.929825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.873 [2024-07-12 20:43:53.929837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:59.873 [2024-07-12 20:43:53.929849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:59.873 [2024-07-12 20:43:53.929859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.873 [2024-07-12 20:43:53.929967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.409 ms, result 0 00:26:59.873 true 00:26:59.873 20:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:59.873 20:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:59.873 20:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:00.130 20:43:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:00.130 20:43:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:00.130 20:43:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:00.389 [2024-07-12 20:43:54.446014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.389 [2024-07-12 20:43:54.446121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:00.389 [2024-07-12 20:43:54.446142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:00.389 [2024-07-12 20:43:54.446153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.389 [2024-07-12 20:43:54.446197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.389 [2024-07-12 20:43:54.446212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:00.389 [2024-07-12 20:43:54.446224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:00.389 [2024-07-12 20:43:54.446235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.389 [2024-07-12 20:43:54.446284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.389 [2024-07-12 20:43:54.446298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:00.389 [2024-07-12 20:43:54.446321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:00.389 [2024-07-12 20:43:54.446332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.389 [2024-07-12 20:43:54.446421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.404 ms, result 0 00:27:00.389 true 00:27:00.389 20:43:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:00.649 { 00:27:00.649 "name": "ftl", 00:27:00.649 "properties": [ 00:27:00.649 { 00:27:00.649 "name": "superblock_version", 00:27:00.649 "value": 5, 00:27:00.649 "read-only": true 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "name": "base_device", 00:27:00.649 "bands": [ 00:27:00.649 { 00:27:00.649 "id": 0, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 1, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 2, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 3, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 4, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 5, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 6, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 7, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 8, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 9, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 10, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 11, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 12, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 13, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 14, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 15, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 16, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 17, 00:27:00.649 "state": "FREE", 00:27:00.649 "validity": 0.0 00:27:00.649 } 00:27:00.649 ], 00:27:00.649 "read-only": true 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "name": "cache_device", 00:27:00.649 "type": "bdev", 00:27:00.649 "chunks": [ 00:27:00.649 { 00:27:00.649 "id": 0, 00:27:00.649 "state": "INACTIVE", 00:27:00.649 "utilization": 0.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 1, 00:27:00.649 "state": "CLOSED", 00:27:00.649 "utilization": 1.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 2, 00:27:00.649 "state": "CLOSED", 00:27:00.649 "utilization": 1.0 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 3, 00:27:00.649 "state": "OPEN", 00:27:00.649 "utilization": 0.001953125 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "id": 4, 00:27:00.649 "state": "OPEN", 00:27:00.649 "utilization": 0.0 00:27:00.649 } 00:27:00.649 ], 00:27:00.649 "read-only": true 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "name": "verbose_mode", 00:27:00.649 "value": true, 00:27:00.649 "unit": "", 00:27:00.649 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:00.649 }, 00:27:00.649 { 00:27:00.649 "name": "prep_upgrade_on_shutdown", 00:27:00.649 "value": true, 00:27:00.649 "unit": "", 00:27:00.649 
"desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:00.649 } 00:27:00.649 ] 00:27:00.649 } 00:27:00.649 20:43:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:00.649 20:43:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 97296 ]] 00:27:00.649 20:43:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 97296 00:27:00.649 20:43:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 97296 ']' 00:27:00.649 20:43:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 97296 00:27:00.649 20:43:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:27:00.649 20:43:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:00.649 20:43:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97296 00:27:00.649 killing process with pid 97296 00:27:00.650 20:43:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:00.650 20:43:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:00.650 20:43:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97296' 00:27:00.650 20:43:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 97296 00:27:00.650 20:43:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 97296 00:27:00.909 [2024-07-12 20:43:55.013976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:00.909 [2024-07-12 20:43:55.020896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.909 [2024-07-12 20:43:55.020932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:00.909 [2024-07-12 20:43:55.020951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:00.909 [2024-07-12 20:43:55.020962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.909 [2024-07-12 20:43:55.020992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:00.909 [2024-07-12 20:43:55.022143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.909 [2024-07-12 20:43:55.022168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:00.909 [2024-07-12 20:43:55.022180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.134 ms 00:27:00.909 [2024-07-12 20:43:55.022191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.205847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.205939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:10.913 [2024-07-12 20:44:03.205969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8183.607 ms 00:27:10.913 [2024-07-12 20:44:03.205980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.207182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.207215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:10.913 [2024-07-12 20:44:03.207230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.179 ms 00:27:10.913 [2024-07-12 20:44:03.207241] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.208364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.208395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:10.913 [2024-07-12 20:44:03.208410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.076 ms 00:27:10.913 [2024-07-12 20:44:03.208427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.210756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.210790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:10.913 [2024-07-12 20:44:03.210804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.275 ms 00:27:10.913 [2024-07-12 20:44:03.210815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.213512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.213741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:10.913 [2024-07-12 20:44:03.213863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.660 ms 00:27:10.913 [2024-07-12 20:44:03.213886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.213980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.214009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:10.913 [2024-07-12 20:44:03.214029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:27:10.913 [2024-07-12 20:44:03.214040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.215417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.215453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:10.913 [2024-07-12 20:44:03.215494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.357 ms 00:27:10.913 [2024-07-12 20:44:03.215504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.216652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.216688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:10.913 [2024-07-12 20:44:03.216709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.111 ms 00:27:10.913 [2024-07-12 20:44:03.216718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.218008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.218041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:10.913 [2024-07-12 20:44:03.218054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.256 ms 00:27:10.913 [2024-07-12 20:44:03.218064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.219161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.219201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:10.913 [2024-07-12 20:44:03.219215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.019 ms 
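The used=3 result a little further up is the gate for this scenario: before killing the target, the test confirms that the NV cache still holds chunks with non-zero utilization, so the prep-upgrade shutdown traced here actually has dirty data to persist. A minimal sketch of that gate, using only the rpc.py path, bdev name and jq filter visible in this log:

    # Count cache_device chunks that still hold data (utilization != 0).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    used=$($RPC bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length')
    # With no dirty chunks the shutdown-upgrade case would be vacuous.
    [[ "$used" -eq 0 ]] && exit 1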
00:27:10.913 [2024-07-12 20:44:03.219225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.219269] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:10.913 [2024-07-12 20:44:03.219292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:10.913 [2024-07-12 20:44:03.219322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:10.913 [2024-07-12 20:44:03.219335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:10.913 [2024-07-12 20:44:03.219347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:10.913 [2024-07-12 20:44:03.219563] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:10.913 [2024-07-12 20:44:03.219575] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 81f6ab7e-04f4-4020-ac3e-dfc4e2c58e3c 00:27:10.913 [2024-07-12 20:44:03.219588] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:10.913 [2024-07-12 20:44:03.219599] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:10.913 [2024-07-12 20:44:03.219609] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:10.913 [2024-07-12 20:44:03.219621] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:10.913 [2024-07-12 
20:44:03.219638] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:10.913 [2024-07-12 20:44:03.219665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:10.913 [2024-07-12 20:44:03.219676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:10.913 [2024-07-12 20:44:03.219686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:10.913 [2024-07-12 20:44:03.219696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:10.913 [2024-07-12 20:44:03.219709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.219721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:10.913 [2024-07-12 20:44:03.219733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.441 ms 00:27:10.913 [2024-07-12 20:44:03.219744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.222653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.222703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:10.913 [2024-07-12 20:44:03.222731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.887 ms 00:27:10.913 [2024-07-12 20:44:03.222742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.222924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.913 [2024-07-12 20:44:03.222937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:10.913 [2024-07-12 20:44:03.222949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.145 ms 00:27:10.913 [2024-07-12 20:44:03.222959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.234182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:10.913 [2024-07-12 20:44:03.234260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:10.913 [2024-07-12 20:44:03.234293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:10.913 [2024-07-12 20:44:03.234315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.234359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:10.913 [2024-07-12 20:44:03.234385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:10.913 [2024-07-12 20:44:03.234397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:10.913 [2024-07-12 20:44:03.234408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.234497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:10.913 [2024-07-12 20:44:03.234522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:10.913 [2024-07-12 20:44:03.234535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:10.913 [2024-07-12 20:44:03.234559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.234590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:10.913 [2024-07-12 20:44:03.234603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:10.913 [2024-07-12 20:44:03.234614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
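The WAF figure in the statistics dump is simply total writes divided by user writes: 786752 / 524288 ≈ 1.5006, i.e. roughly half an extra block of metadata and relocation traffic was written for every user block in this run. A quick check of that arithmetic with the values printed by ftl_debug.c:

    awk 'BEGIN { printf "WAF = %.4f\n", 786752 / 524288 }'   # prints: WAF = 1.5006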
00:27:10.913 [2024-07-12 20:44:03.234625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.253833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:10.913 [2024-07-12 20:44:03.253916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:10.913 [2024-07-12 20:44:03.253947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:10.913 [2024-07-12 20:44:03.253974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.269306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:10.913 [2024-07-12 20:44:03.269397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:10.913 [2024-07-12 20:44:03.269416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:10.913 [2024-07-12 20:44:03.269429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.269581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:10.913 [2024-07-12 20:44:03.269600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:10.913 [2024-07-12 20:44:03.269613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:10.913 [2024-07-12 20:44:03.269636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.913 [2024-07-12 20:44:03.269720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:10.913 [2024-07-12 20:44:03.269737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:10.913 [2024-07-12 20:44:03.269763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:10.913 [2024-07-12 20:44:03.269781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.914 [2024-07-12 20:44:03.269881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:10.914 [2024-07-12 20:44:03.269910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:10.914 [2024-07-12 20:44:03.269923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:10.914 [2024-07-12 20:44:03.269934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.914 [2024-07-12 20:44:03.270017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:10.914 [2024-07-12 20:44:03.270044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:10.914 [2024-07-12 20:44:03.270058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:10.914 [2024-07-12 20:44:03.270077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.914 [2024-07-12 20:44:03.270135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:10.914 [2024-07-12 20:44:03.270150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:10.914 [2024-07-12 20:44:03.270163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:10.914 [2024-07-12 20:44:03.270174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.914 [2024-07-12 20:44:03.270259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:10.914 [2024-07-12 20:44:03.270284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:10.914 [2024-07-12 20:44:03.270298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.000 ms 00:27:10.914 [2024-07-12 20:44:03.270310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.914 [2024-07-12 20:44:03.270491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8249.547 ms, result 0 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=97822 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 97822 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 97822 ']' 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.483 20:44:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:11.483 [2024-07-12 20:44:05.478722] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:11.483 [2024-07-12 20:44:05.479346] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97822 ] 00:27:11.743 [2024-07-12 20:44:05.639908] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
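With the old target (pid 97296) gone, tcp_target_setup starts a fresh spdk_tgt from the JSON config saved before shutdown, so the FTL device comes back on the same base and cache bdevs (the startup trace below shows it using cachen1p0 again), and the harness blocks until the new process answers on its RPC socket. A sketch of that restart-and-wait pattern, using the binary, config path and socket from this log; the polling loop is only a stand-in for the harness's own waitforlisten helper:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    # Do not issue bdev_ftl_* RPCs until the socket is up.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done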
00:27:11.743 [2024-07-12 20:44:05.659614] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.743 [2024-07-12 20:44:05.749196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.311 [2024-07-12 20:44:06.178733] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:12.311 [2024-07-12 20:44:06.178856] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:12.311 [2024-07-12 20:44:06.325138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.311 [2024-07-12 20:44:06.325218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:12.311 [2024-07-12 20:44:06.325278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:12.311 [2024-07-12 20:44:06.325293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.311 [2024-07-12 20:44:06.325440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.311 [2024-07-12 20:44:06.325460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:12.311 [2024-07-12 20:44:06.325473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:27:12.311 [2024-07-12 20:44:06.325500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.311 [2024-07-12 20:44:06.325579] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:12.311 [2024-07-12 20:44:06.325870] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:12.311 [2024-07-12 20:44:06.325920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.311 [2024-07-12 20:44:06.325933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:12.311 [2024-07-12 20:44:06.325961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.349 ms 00:27:12.311 [2024-07-12 20:44:06.325973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.311 [2024-07-12 20:44:06.328697] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:12.311 [2024-07-12 20:44:06.332680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.311 [2024-07-12 20:44:06.332731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:12.311 [2024-07-12 20:44:06.332783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.989 ms 00:27:12.311 [2024-07-12 20:44:06.332795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.311 [2024-07-12 20:44:06.332888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.311 [2024-07-12 20:44:06.332908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:12.311 [2024-07-12 20:44:06.332921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:12.311 [2024-07-12 20:44:06.332937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.311 [2024-07-12 20:44:06.345705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.311 [2024-07-12 20:44:06.345751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:12.311 [2024-07-12 20:44:06.345787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.673 ms 00:27:12.311 [2024-07-12 20:44:06.345798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:27:12.311 [2024-07-12 20:44:06.345873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.311 [2024-07-12 20:44:06.345892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:12.311 [2024-07-12 20:44:06.345909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:12.311 [2024-07-12 20:44:06.345920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.311 [2024-07-12 20:44:06.346005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.311 [2024-07-12 20:44:06.346023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:12.311 [2024-07-12 20:44:06.346036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:12.311 [2024-07-12 20:44:06.346057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.311 [2024-07-12 20:44:06.346096] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:12.311 [2024-07-12 20:44:06.348855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.311 [2024-07-12 20:44:06.348892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:12.311 [2024-07-12 20:44:06.348907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.768 ms 00:27:12.311 [2024-07-12 20:44:06.348929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.311 [2024-07-12 20:44:06.348968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.311 [2024-07-12 20:44:06.349001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:12.311 [2024-07-12 20:44:06.349024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:12.311 [2024-07-12 20:44:06.349036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.311 [2024-07-12 20:44:06.349069] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:12.311 [2024-07-12 20:44:06.349102] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:12.311 [2024-07-12 20:44:06.349166] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:12.311 [2024-07-12 20:44:06.349202] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:27:12.311 [2024-07-12 20:44:06.349329] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:12.311 [2024-07-12 20:44:06.349350] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:12.311 [2024-07-12 20:44:06.349381] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:27:12.311 [2024-07-12 20:44:06.349403] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:12.311 [2024-07-12 20:44:06.349418] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:12.311 [2024-07-12 20:44:06.349431] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:12.311 [2024-07-12 20:44:06.349443] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:12.311 [2024-07-12 20:44:06.349473] ftl_layout.c: 
681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:12.311 [2024-07-12 20:44:06.349484] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:12.311 [2024-07-12 20:44:06.349497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.311 [2024-07-12 20:44:06.349508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:12.311 [2024-07-12 20:44:06.349520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:27:12.311 [2024-07-12 20:44:06.349531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.311 [2024-07-12 20:44:06.349618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.311 [2024-07-12 20:44:06.349634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:12.311 [2024-07-12 20:44:06.349647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:12.311 [2024-07-12 20:44:06.349657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.311 [2024-07-12 20:44:06.349765] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:12.311 [2024-07-12 20:44:06.349782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:12.311 [2024-07-12 20:44:06.349809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:12.311 [2024-07-12 20:44:06.349822] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.311 [2024-07-12 20:44:06.349833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:12.311 [2024-07-12 20:44:06.349843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:12.311 [2024-07-12 20:44:06.349854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:12.311 [2024-07-12 20:44:06.349864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:12.311 [2024-07-12 20:44:06.349877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:12.311 [2024-07-12 20:44:06.349887] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.311 [2024-07-12 20:44:06.349897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:12.311 [2024-07-12 20:44:06.349907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:12.311 [2024-07-12 20:44:06.349919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.311 [2024-07-12 20:44:06.349929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:12.311 [2024-07-12 20:44:06.349941] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:12.311 [2024-07-12 20:44:06.349952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.312 [2024-07-12 20:44:06.349962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:12.312 [2024-07-12 20:44:06.349972] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:12.312 [2024-07-12 20:44:06.349990] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.312 [2024-07-12 20:44:06.350001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:12.312 [2024-07-12 20:44:06.350012] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:12.312 [2024-07-12 20:44:06.350022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:12.312 [2024-07-12 
20:44:06.350032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:12.312 [2024-07-12 20:44:06.350043] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:12.312 [2024-07-12 20:44:06.350053] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:12.312 [2024-07-12 20:44:06.350063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:12.312 [2024-07-12 20:44:06.350073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:12.312 [2024-07-12 20:44:06.350083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:12.312 [2024-07-12 20:44:06.350093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:12.312 [2024-07-12 20:44:06.350103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:12.312 [2024-07-12 20:44:06.350113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:12.312 [2024-07-12 20:44:06.350145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:12.312 [2024-07-12 20:44:06.350156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:12.312 [2024-07-12 20:44:06.350167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.312 [2024-07-12 20:44:06.350191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:12.312 [2024-07-12 20:44:06.350202] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:12.312 [2024-07-12 20:44:06.350213] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.312 [2024-07-12 20:44:06.350224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:12.312 [2024-07-12 20:44:06.350235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:12.312 [2024-07-12 20:44:06.350245] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.312 [2024-07-12 20:44:06.350256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:12.312 [2024-07-12 20:44:06.350267] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:12.312 [2024-07-12 20:44:06.350277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.312 [2024-07-12 20:44:06.350814] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:12.312 [2024-07-12 20:44:06.350997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:12.312 [2024-07-12 20:44:06.351050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:12.312 [2024-07-12 20:44:06.351218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:12.312 [2024-07-12 20:44:06.351383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:12.312 [2024-07-12 20:44:06.351544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:12.312 [2024-07-12 20:44:06.351660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:12.312 [2024-07-12 20:44:06.351721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:12.312 [2024-07-12 20:44:06.351765] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:12.312 [2024-07-12 20:44:06.351900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:12.312 [2024-07-12 20:44:06.351946] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB 
metadata layout - nvc: 00:27:12.312 [2024-07-12 20:44:06.352150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:12.312 [2024-07-12 20:44:06.352284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:12.312 [2024-07-12 20:44:06.352303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:12.312 [2024-07-12 20:44:06.352316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:12.312 [2024-07-12 20:44:06.352329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:12.312 [2024-07-12 20:44:06.352341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:12.312 [2024-07-12 20:44:06.352353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:12.312 [2024-07-12 20:44:06.352365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:12.312 [2024-07-12 20:44:06.352377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:12.312 [2024-07-12 20:44:06.352404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:12.312 [2024-07-12 20:44:06.352416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:12.312 [2024-07-12 20:44:06.352428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:12.312 [2024-07-12 20:44:06.352446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:12.312 [2024-07-12 20:44:06.352460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:12.312 [2024-07-12 20:44:06.352472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:12.312 [2024-07-12 20:44:06.352484] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:12.312 [2024-07-12 20:44:06.352498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:12.312 [2024-07-12 20:44:06.352511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:12.312 [2024-07-12 20:44:06.352523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:12.312 [2024-07-12 20:44:06.352535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:12.312 [2024-07-12 20:44:06.352547] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:12.312 [2024-07-12 20:44:06.352576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.312 [2024-07-12 20:44:06.352589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:12.312 [2024-07-12 20:44:06.352602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.851 ms 00:27:12.312 [2024-07-12 20:44:06.352627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.312 [2024-07-12 20:44:06.352736] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:12.312 [2024-07-12 20:44:06.352758] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:14.842 [2024-07-12 20:44:08.773126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.773518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:14.842 [2024-07-12 20:44:08.773638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2420.386 ms 00:27:14.842 [2024-07-12 20:44:08.773687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.793619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.793850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:14.842 [2024-07-12 20:44:08.793986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.686 ms 00:27:14.842 [2024-07-12 20:44:08.794116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.794282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.794398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:14.842 [2024-07-12 20:44:08.794516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:14.842 [2024-07-12 20:44:08.794560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.812647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.812850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:14.842 [2024-07-12 20:44:08.812969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.897 ms 00:27:14.842 [2024-07-12 20:44:08.813032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.813211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.813304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:14.842 [2024-07-12 20:44:08.813450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:14.842 [2024-07-12 20:44:08.813510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.814496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.814633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:14.842 [2024-07-12 20:44:08.814752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.821 ms 00:27:14.842 [2024-07-12 20:44:08.814797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
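The layout dumps earlier in this startup trace are internally consistent once the hex superblock values are converted. The FTL block size here is 4 KiB (regions of 0x20 blocks are printed as 0.12 MiB), so the base-device data region (type 0x9, blk_sz 0x480000) is 4,718,592 blocks, exactly the 18432.00 MiB reported for data_btm, and the 3,774,873 L2P entries at 4 bytes each come to about 14.4 MiB, matching the 14.50 MiB l2p region. A quick check of that arithmetic:

    awk 'BEGIN {
        blocks = 4718592                                            # 0x480000 from the SB dump
        printf "data region: %.2f MiB\n", blocks * 4096 / 1048576   # 18432.00
        printf "l2p table:   %.2f MiB\n", 3774873 * 4 / 1048576     # ~14.40
    }'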
00:27:14.842 [2024-07-12 20:44:08.814941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.814999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:14.842 [2024-07-12 20:44:08.815049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:14.842 [2024-07-12 20:44:08.815207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.828272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.828517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:14.842 [2024-07-12 20:44:08.828544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.945 ms 00:27:14.842 [2024-07-12 20:44:08.828558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.832387] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:14.842 [2024-07-12 20:44:08.832429] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:14.842 [2024-07-12 20:44:08.832463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.832476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:14.842 [2024-07-12 20:44:08.832488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.735 ms 00:27:14.842 [2024-07-12 20:44:08.832499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.837435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.837489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:14.842 [2024-07-12 20:44:08.837550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.889 ms 00:27:14.842 [2024-07-12 20:44:08.837563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.839306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.839382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:14.842 [2024-07-12 20:44:08.839410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.708 ms 00:27:14.842 [2024-07-12 20:44:08.839436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.841073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.841111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:14.842 [2024-07-12 20:44:08.841126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.559 ms 00:27:14.842 [2024-07-12 20:44:08.841154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.841541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.841571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:14.842 [2024-07-12 20:44:08.841586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:27:14.842 [2024-07-12 20:44:08.841604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.879144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.879222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:14.842 [2024-07-12 20:44:08.879270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.511 ms 00:27:14.842 [2024-07-12 20:44:08.879285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.886513] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:14.842 [2024-07-12 20:44:08.887229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.887292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:14.842 [2024-07-12 20:44:08.887311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.869 ms 00:27:14.842 [2024-07-12 20:44:08.887324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.887415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.887437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:14.842 [2024-07-12 20:44:08.887452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:14.842 [2024-07-12 20:44:08.887485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.887578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.842 [2024-07-12 20:44:08.887614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:14.842 [2024-07-12 20:44:08.887629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:14.842 [2024-07-12 20:44:08.887654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.842 [2024-07-12 20:44:08.887696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.843 [2024-07-12 20:44:08.887714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:14.843 [2024-07-12 20:44:08.887728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:14.843 [2024-07-12 20:44:08.887797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.843 [2024-07-12 20:44:08.887861] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:14.843 [2024-07-12 20:44:08.887899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.843 [2024-07-12 20:44:08.887921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:14.843 [2024-07-12 20:44:08.887933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:14.843 [2024-07-12 20:44:08.887953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.843 [2024-07-12 20:44:08.891885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.843 [2024-07-12 20:44:08.891922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:14.843 [2024-07-12 20:44:08.891939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.898 ms 00:27:14.843 [2024-07-12 20:44:08.891950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.843 [2024-07-12 20:44:08.892035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.843 [2024-07-12 20:44:08.892060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:14.843 
[2024-07-12 20:44:08.892073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:14.843 [2024-07-12 20:44:08.892084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.843 [2024-07-12 20:44:08.893725] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2567.955 ms, result 0 00:27:14.843 [2024-07-12 20:44:08.908262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.843 [2024-07-12 20:44:08.924320] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:14.843 [2024-07-12 20:44:08.932592] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:14.843 20:44:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:14.843 20:44:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:27:14.843 20:44:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:14.843 20:44:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:14.843 20:44:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:15.101 [2024-07-12 20:44:09.236676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.101 [2024-07-12 20:44:09.236781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:15.101 [2024-07-12 20:44:09.236809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:15.101 [2024-07-12 20:44:09.236822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.101 [2024-07-12 20:44:09.236859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.101 [2024-07-12 20:44:09.236877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:15.101 [2024-07-12 20:44:09.236905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:15.101 [2024-07-12 20:44:09.236934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.101 [2024-07-12 20:44:09.236978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.101 [2024-07-12 20:44:09.236995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:15.101 [2024-07-12 20:44:09.237024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:15.101 [2024-07-12 20:44:09.237036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.101 [2024-07-12 20:44:09.237137] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.464 ms, result 0 00:27:15.101 true 00:27:15.359 20:44:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:15.618 { 00:27:15.618 "name": "ftl", 00:27:15.618 "properties": [ 00:27:15.618 { 00:27:15.618 "name": "superblock_version", 00:27:15.618 "value": 5, 00:27:15.618 "read-only": true 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "name": "base_device", 00:27:15.618 "bands": [ 00:27:15.618 { 00:27:15.618 "id": 0, 00:27:15.618 "state": "CLOSED", 00:27:15.618 "validity": 1.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 1, 00:27:15.618 "state": "CLOSED", 00:27:15.618 "validity": 1.0 
00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 2, 00:27:15.618 "state": "CLOSED", 00:27:15.618 "validity": 0.007843137254901933 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 3, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 4, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 5, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 6, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 7, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 8, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 9, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 10, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 11, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 12, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 13, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 14, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.618 { 00:27:15.618 "id": 15, 00:27:15.618 "state": "FREE", 00:27:15.618 "validity": 0.0 00:27:15.618 }, 00:27:15.619 { 00:27:15.619 "id": 16, 00:27:15.619 "state": "FREE", 00:27:15.619 "validity": 0.0 00:27:15.619 }, 00:27:15.619 { 00:27:15.619 "id": 17, 00:27:15.619 "state": "FREE", 00:27:15.619 "validity": 0.0 00:27:15.619 } 00:27:15.619 ], 00:27:15.619 "read-only": true 00:27:15.619 }, 00:27:15.619 { 00:27:15.619 "name": "cache_device", 00:27:15.619 "type": "bdev", 00:27:15.619 "chunks": [ 00:27:15.619 { 00:27:15.619 "id": 0, 00:27:15.619 "state": "INACTIVE", 00:27:15.619 "utilization": 0.0 00:27:15.619 }, 00:27:15.619 { 00:27:15.619 "id": 1, 00:27:15.619 "state": "OPEN", 00:27:15.619 "utilization": 0.0 00:27:15.619 }, 00:27:15.619 { 00:27:15.619 "id": 2, 00:27:15.619 "state": "OPEN", 00:27:15.619 "utilization": 0.0 00:27:15.619 }, 00:27:15.619 { 00:27:15.619 "id": 3, 00:27:15.619 "state": "FREE", 00:27:15.619 "utilization": 0.0 00:27:15.619 }, 00:27:15.619 { 00:27:15.619 "id": 4, 00:27:15.619 "state": "FREE", 00:27:15.619 "utilization": 0.0 00:27:15.619 } 00:27:15.619 ], 00:27:15.619 "read-only": true 00:27:15.619 }, 00:27:15.619 { 00:27:15.619 "name": "verbose_mode", 00:27:15.619 "value": true, 00:27:15.619 "unit": "", 00:27:15.619 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:15.619 }, 00:27:15.619 { 00:27:15.619 "name": "prep_upgrade_on_shutdown", 00:27:15.619 "value": false, 00:27:15.619 "unit": "", 00:27:15.619 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:15.619 } 00:27:15.619 ] 00:27:15.619 } 00:27:15.619 20:44:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:15.619 20:44:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:15.619 20:44:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | 
select(.utilization != 0.0)] | length' 00:27:15.878 20:44:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:15.878 20:44:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:15.878 20:44:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:15.878 20:44:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:15.878 20:44:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:16.138 Validate MD5 checksum, iteration 1 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:16.138 20:44:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:16.397 [2024-07-12 20:44:10.317346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:16.397 [2024-07-12 20:44:10.317867] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97884 ] 00:27:16.397 [2024-07-12 20:44:10.473762] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
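The two jq filters traced above boil the bdev_ftl_get_properties JSON down to simple counters before the checksum passes start: one counts write-buffer (cache) chunks whose utilization is non-zero, the other counts bands still in the OPENED state. A minimal stand-alone sketch of the first check, assuming the same repo paths and bdev name ("ftl") as in the trace; everything outside the rpc.py/jq pipeline is illustrative:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # count cache chunks that still hold data (non-zero utilization)
    used=$($rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    # a band check of the same shape counts entries with .state == "OPENED"
    if [[ $used -ne 0 ]]; then echo "cache still holds data in $used chunk(s)"; fi

In this run both counters come back 0, so the script falls straight through to the first checksum iteration.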
00:27:16.397 [2024-07-12 20:44:10.498139] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.656 [2024-07-12 20:44:10.602718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.163  Copying: 474/1024 [MB] (474 MBps) Copying: 946/1024 [MB] (472 MBps) Copying: 1024/1024 [MB] (average 472 MBps) 00:27:20.163 00:27:20.163 20:44:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:20.163 20:44:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:22.696 20:44:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:22.696 Validate MD5 checksum, iteration 2 00:27:22.696 20:44:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=089669d1429e70e271533f328b673fcf 00:27:22.696 20:44:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 089669d1429e70e271533f328b673fcf != \0\8\9\6\6\9\d\1\4\2\9\e\7\0\e\2\7\1\5\3\3\f\3\2\8\b\6\7\3\f\c\f ]] 00:27:22.696 20:44:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:22.696 20:44:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:22.696 20:44:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:22.696 20:44:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:22.696 20:44:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:22.696 20:44:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:22.696 20:44:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:22.696 20:44:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:22.696 20:44:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:22.696 [2024-07-12 20:44:16.407178] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:22.696 [2024-07-12 20:44:16.407686] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97951 ] 00:27:22.696 [2024-07-12 20:44:16.562678] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
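Each checksum iteration above follows the same pattern: read a 1024 MiB window of the ftln1 bdev into a scratch file through the tcp_dd wrapper shown in the trace, hash the file, compare the digest, then advance the skip offset by the window size. A rough sketch of that loop, assuming tcp_dd is available from ftl/common.sh and that the reference digests were recorded earlier (how they were produced is not visible in this part of the log):

    validate_checksums() {                           # args: one expected md5 per 1 GiB window
        local want=("$@")
        local file=/home/vagrant/spdk_repo/spdk/test/ftl/file
        local skip=0 i sum
        for (( i = 0; i < ${#want[@]}; i++ )); do
            echo "Validate MD5 checksum, iteration $(( i + 1 ))"
            tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
            sum=$(md5sum "$file" | cut -f1 -d' ')
            [[ $sum == "${want[$i]}" ]] || return 1  # mismatch: FTL handed back different data
            skip=$(( skip + 1024 ))
        done
    }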
00:27:22.696 [2024-07-12 20:44:16.585000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.696 [2024-07-12 20:44:16.684095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.224  Copying: 435/1024 [MB] (435 MBps) Copying: 890/1024 [MB] (455 MBps) Copying: 1024/1024 [MB] (average 445 MBps) 00:27:27.224 00:27:27.224 20:44:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:27.224 20:44:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=db7f06b652f231a6851154a3bc52cbbc 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ db7f06b652f231a6851154a3bc52cbbc != \d\b\7\f\0\6\b\6\5\2\f\2\3\1\a\6\8\5\1\1\5\4\a\3\b\c\5\2\c\b\b\c ]] 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 97822 ]] 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 97822 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=98024 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 98024 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 98024 ']' 00:27:29.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:29.755 20:44:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:29.755 [2024-07-12 20:44:23.593375] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
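The dirty-shutdown step traced above is deliberately blunt: the running target is killed with SIGKILL so FTL never gets a chance to persist a clean state, then a fresh spdk_tgt is started from the tgt.json saved earlier and the script waits for its RPC socket before continuing. Condensed from the trace (capturing the new pid via $! is my assumption; the binary path, cpumask and config path are as shown):

    kill -9 "$spdk_tgt_pid"        # simulate a crash: no clean FTL shutdown
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"  # blocks until /var/tmp/spdk.sock answers

The recovery work visible in the next startup (recovering band state, restoring P2L checkpoints, recovering open chunks) presumably runs because this restart finds the superblock without a clean shutdown recorded.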
00:27:29.755 [2024-07-12 20:44:23.593896] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98024 ] 00:27:29.755 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 97822 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:29.755 [2024-07-12 20:44:23.746771] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:29.755 [2024-07-12 20:44:23.764103] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.755 [2024-07-12 20:44:23.869185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.322 [2024-07-12 20:44:24.306612] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:30.322 [2024-07-12 20:44:24.306758] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:30.322 [2024-07-12 20:44:24.458307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.322 [2024-07-12 20:44:24.458436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:30.322 [2024-07-12 20:44:24.458469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:30.322 [2024-07-12 20:44:24.458481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.322 [2024-07-12 20:44:24.458577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.322 [2024-07-12 20:44:24.458598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:30.322 [2024-07-12 20:44:24.458610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:27:30.322 [2024-07-12 20:44:24.458631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.322 [2024-07-12 20:44:24.458684] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:30.322 [2024-07-12 20:44:24.459045] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:30.322 [2024-07-12 20:44:24.459075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.322 [2024-07-12 20:44:24.459088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:30.322 [2024-07-12 20:44:24.459100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.399 ms 00:27:30.322 [2024-07-12 20:44:24.459127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.322 [2024-07-12 20:44:24.459732] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:30.322 [2024-07-12 20:44:24.466237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.322 [2024-07-12 20:44:24.466332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:30.322 [2024-07-12 20:44:24.466355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.498 ms 00:27:30.322 [2024-07-12 20:44:24.466367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.322 [2024-07-12 20:44:24.467394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.322 [2024-07-12 20:44:24.467441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super 
block 00:27:30.322 [2024-07-12 20:44:24.467458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:30.322 [2024-07-12 20:44:24.467490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.322 [2024-07-12 20:44:24.468006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.322 [2024-07-12 20:44:24.468050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:30.322 [2024-07-12 20:44:24.468065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.406 ms 00:27:30.322 [2024-07-12 20:44:24.468076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.322 [2024-07-12 20:44:24.468151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.322 [2024-07-12 20:44:24.468173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:30.322 [2024-07-12 20:44:24.468189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:30.322 [2024-07-12 20:44:24.468207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.322 [2024-07-12 20:44:24.468264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.322 [2024-07-12 20:44:24.468298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:30.322 [2024-07-12 20:44:24.468311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:30.322 [2024-07-12 20:44:24.468321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.322 [2024-07-12 20:44:24.468361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:30.581 [2024-07-12 20:44:24.469372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.582 [2024-07-12 20:44:24.469419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:30.582 [2024-07-12 20:44:24.469433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.023 ms 00:27:30.582 [2024-07-12 20:44:24.469444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.582 [2024-07-12 20:44:24.469488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.582 [2024-07-12 20:44:24.469519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:30.582 [2024-07-12 20:44:24.469531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:30.582 [2024-07-12 20:44:24.469542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.582 [2024-07-12 20:44:24.469571] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:30.582 [2024-07-12 20:44:24.469618] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:30.582 [2024-07-12 20:44:24.469668] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:30.582 [2024-07-12 20:44:24.469691] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:27:30.582 [2024-07-12 20:44:24.469785] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:30.582 [2024-07-12 20:44:24.469800] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:30.582 [2024-07-12 20:44:24.469824] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:27:30.582 [2024-07-12 20:44:24.469847] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:30.582 [2024-07-12 20:44:24.469860] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:30.582 [2024-07-12 20:44:24.469881] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:30.582 [2024-07-12 20:44:24.469892] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:30.582 [2024-07-12 20:44:24.469906] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:30.582 [2024-07-12 20:44:24.469917] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:30.582 [2024-07-12 20:44:24.469928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.582 [2024-07-12 20:44:24.469939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:30.582 [2024-07-12 20:44:24.469950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.359 ms 00:27:30.582 [2024-07-12 20:44:24.469969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.582 [2024-07-12 20:44:24.470067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.582 [2024-07-12 20:44:24.470082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:30.582 [2024-07-12 20:44:24.470093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:27:30.582 [2024-07-12 20:44:24.470104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.582 [2024-07-12 20:44:24.470233] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:30.582 [2024-07-12 20:44:24.470254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:30.582 [2024-07-12 20:44:24.470268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:30.582 [2024-07-12 20:44:24.470307] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.582 [2024-07-12 20:44:24.470338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:30.582 [2024-07-12 20:44:24.470351] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:30.582 [2024-07-12 20:44:24.470363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:30.582 [2024-07-12 20:44:24.470373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:30.582 [2024-07-12 20:44:24.470417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:30.582 [2024-07-12 20:44:24.470428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.582 [2024-07-12 20:44:24.470439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:30.582 [2024-07-12 20:44:24.470450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:30.582 [2024-07-12 20:44:24.470463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.582 [2024-07-12 20:44:24.470474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:30.582 [2024-07-12 20:44:24.470484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:30.582 [2024-07-12 20:44:24.470496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.582 [2024-07-12 
20:44:24.470508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:30.582 [2024-07-12 20:44:24.470523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:30.582 [2024-07-12 20:44:24.470534] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.582 [2024-07-12 20:44:24.470545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:30.582 [2024-07-12 20:44:24.470555] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:30.582 [2024-07-12 20:44:24.470566] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.582 [2024-07-12 20:44:24.470576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:30.582 [2024-07-12 20:44:24.470586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:30.582 [2024-07-12 20:44:24.470596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.582 [2024-07-12 20:44:24.470606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:30.582 [2024-07-12 20:44:24.470616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:30.582 [2024-07-12 20:44:24.470627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.582 [2024-07-12 20:44:24.470637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:30.582 [2024-07-12 20:44:24.470648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:30.582 [2024-07-12 20:44:24.470658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.582 [2024-07-12 20:44:24.470668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:30.582 [2024-07-12 20:44:24.470679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:30.582 [2024-07-12 20:44:24.470692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.582 [2024-07-12 20:44:24.470702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:30.582 [2024-07-12 20:44:24.470713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:30.582 [2024-07-12 20:44:24.470723] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.582 [2024-07-12 20:44:24.470734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:30.582 [2024-07-12 20:44:24.470744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:30.582 [2024-07-12 20:44:24.470754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.582 [2024-07-12 20:44:24.470764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:30.582 [2024-07-12 20:44:24.470774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:30.582 [2024-07-12 20:44:24.470784] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.582 [2024-07-12 20:44:24.470797] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:30.582 [2024-07-12 20:44:24.470837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:30.582 [2024-07-12 20:44:24.470849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:30.582 [2024-07-12 20:44:24.470859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.582 [2024-07-12 20:44:24.470887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:30.582 [2024-07-12 
20:44:24.470910] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:30.582 [2024-07-12 20:44:24.470926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:30.582 [2024-07-12 20:44:24.470936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:30.582 [2024-07-12 20:44:24.470945] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:30.582 [2024-07-12 20:44:24.470955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:30.582 [2024-07-12 20:44:24.470967] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:30.582 [2024-07-12 20:44:24.470979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.582 [2024-07-12 20:44:24.470995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:30.582 [2024-07-12 20:44:24.471005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:30.582 [2024-07-12 20:44:24.471016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:30.582 [2024-07-12 20:44:24.471026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:30.582 [2024-07-12 20:44:24.471038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:30.582 [2024-07-12 20:44:24.471049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:30.582 [2024-07-12 20:44:24.471059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:30.582 [2024-07-12 20:44:24.471070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:30.582 [2024-07-12 20:44:24.471080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:30.582 [2024-07-12 20:44:24.471091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:30.582 [2024-07-12 20:44:24.471104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:30.582 [2024-07-12 20:44:24.471115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:30.582 [2024-07-12 20:44:24.471125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:30.582 [2024-07-12 20:44:24.471136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:30.582 [2024-07-12 20:44:24.471147] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:30.582 [2024-07-12 20:44:24.471158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.582 [2024-07-12 20:44:24.471170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:30.582 [2024-07-12 20:44:24.471181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:30.582 [2024-07-12 20:44:24.471191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:30.582 [2024-07-12 20:44:24.471203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:30.582 [2024-07-12 20:44:24.471214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.582 [2024-07-12 20:44:24.471225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:30.583 [2024-07-12 20:44:24.471235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.041 ms 00:27:30.583 [2024-07-12 20:44:24.471245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.487668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.487745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:30.583 [2024-07-12 20:44:24.487781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.301 ms 00:27:30.583 [2024-07-12 20:44:24.487815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.487906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.487922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:30.583 [2024-07-12 20:44:24.487934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:30.583 [2024-07-12 20:44:24.487946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.505587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.505709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:30.583 [2024-07-12 20:44:24.505739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.520 ms 00:27:30.583 [2024-07-12 20:44:24.505757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.505855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.505871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:30.583 [2024-07-12 20:44:24.505883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:30.583 [2024-07-12 20:44:24.505894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.506076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.506093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:30.583 [2024-07-12 20:44:24.506119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:27:30.583 [2024-07-12 20:44:24.506130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.506202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 
[2024-07-12 20:44:24.506222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:30.583 [2024-07-12 20:44:24.506234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:30.583 [2024-07-12 20:44:24.506272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.520133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.520217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:30.583 [2024-07-12 20:44:24.520258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.812 ms 00:27:30.583 [2024-07-12 20:44:24.520289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.520555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.520584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:30.583 [2024-07-12 20:44:24.520612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:30.583 [2024-07-12 20:44:24.520623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.537188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.537306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:30.583 [2024-07-12 20:44:24.537341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.529 ms 00:27:30.583 [2024-07-12 20:44:24.537370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.539124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.539174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:30.583 [2024-07-12 20:44:24.539203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.454 ms 00:27:30.583 [2024-07-12 20:44:24.539219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.568918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.569023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:30.583 [2024-07-12 20:44:24.569063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.633 ms 00:27:30.583 [2024-07-12 20:44:24.569076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.569453] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:30.583 [2024-07-12 20:44:24.569635] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:30.583 [2024-07-12 20:44:24.569807] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:30.583 [2024-07-12 20:44:24.569970] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:30.583 [2024-07-12 20:44:24.569985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.570012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:30.583 [2024-07-12 20:44:24.570030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.780 ms 00:27:30.583 [2024-07-12 20:44:24.570041] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.570176] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:30.583 [2024-07-12 20:44:24.570198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.570209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:30.583 [2024-07-12 20:44:24.570222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:30.583 [2024-07-12 20:44:24.570248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.573384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.573440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:30.583 [2024-07-12 20:44:24.573456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.063 ms 00:27:30.583 [2024-07-12 20:44:24.573467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.574124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.583 [2024-07-12 20:44:24.574168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:30.583 [2024-07-12 20:44:24.574183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:30.583 [2024-07-12 20:44:24.574193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.583 [2024-07-12 20:44:24.574645] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:31.152 [2024-07-12 20:44:25.173804] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:31.152 [2024-07-12 20:44:25.174122] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:31.719 [2024-07-12 20:44:25.741367] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:31.719 [2024-07-12 20:44:25.741524] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:31.719 [2024-07-12 20:44:25.741546] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:31.719 [2024-07-12 20:44:25.741573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.719 [2024-07-12 20:44:25.741623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:31.719 [2024-07-12 20:44:25.741656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1167.251 ms 00:27:31.719 [2024-07-12 20:44:25.741682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.719 [2024-07-12 20:44:25.741743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.719 [2024-07-12 20:44:25.741775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:31.719 [2024-07-12 20:44:25.741786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:31.719 [2024-07-12 20:44:25.741796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.719 [2024-07-12 20:44:25.752166] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 
2) MiB 00:27:31.719 [2024-07-12 20:44:25.752358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.719 [2024-07-12 20:44:25.752403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:31.719 [2024-07-12 20:44:25.752417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.542 ms 00:27:31.719 [2024-07-12 20:44:25.752429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.719 [2024-07-12 20:44:25.753259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.719 [2024-07-12 20:44:25.753310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:31.719 [2024-07-12 20:44:25.753328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.723 ms 00:27:31.719 [2024-07-12 20:44:25.753341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.719 [2024-07-12 20:44:25.756159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.719 [2024-07-12 20:44:25.756193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:31.719 [2024-07-12 20:44:25.756208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.777 ms 00:27:31.719 [2024-07-12 20:44:25.756225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.719 [2024-07-12 20:44:25.756302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.719 [2024-07-12 20:44:25.756321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:31.719 [2024-07-12 20:44:25.756334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:31.719 [2024-07-12 20:44:25.756361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.719 [2024-07-12 20:44:25.756535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.719 [2024-07-12 20:44:25.756554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:31.719 [2024-07-12 20:44:25.756567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:31.719 [2024-07-12 20:44:25.756578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.719 [2024-07-12 20:44:25.756615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.719 [2024-07-12 20:44:25.756645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:31.719 [2024-07-12 20:44:25.756658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:31.720 [2024-07-12 20:44:25.756700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.720 [2024-07-12 20:44:25.756737] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:31.720 [2024-07-12 20:44:25.756752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.720 [2024-07-12 20:44:25.756762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:31.720 [2024-07-12 20:44:25.756773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:31.720 [2024-07-12 20:44:25.756783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.720 [2024-07-12 20:44:25.756883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.720 [2024-07-12 20:44:25.756936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:31.720 [2024-07-12 
20:44:25.757002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:31.720 [2024-07-12 20:44:25.757014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.720 [2024-07-12 20:44:25.759137] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1299.780 ms, result 0 00:27:31.720 [2024-07-12 20:44:25.772795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.720 [2024-07-12 20:44:25.788904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:31.720 [2024-07-12 20:44:25.797042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:32.301 20:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:32.301 20:44:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:27:32.301 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:32.301 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:32.301 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:32.301 Validate MD5 checksum, iteration 1 00:27:32.301 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:32.301 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:32.302 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:32.302 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:32.302 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:32.302 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:32.302 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:32.302 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:32.302 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:32.302 20:44:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:32.302 [2024-07-12 20:44:26.338804] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:32.302 [2024-07-12 20:44:26.339114] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98053 ] 00:27:32.573 [2024-07-12 20:44:26.496365] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
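After recovery the same read path is exercised again: spdk_dd runs as a separate process, attaches to the restarted target using the bdev config in ini.json (its contents are not shown in this part of the log, but the surrounding TCP notices suggest it connects to the NVMe/TCP listener on 127.0.0.1:4420 and exposes ftln1), and copies the bdev back into the scratch file. The command line below is taken verbatim from the trace; the per-flag notes are my reading of spdk_dd's dd-style options rather than anything stated in the log:

    # --ib / --of     : input is the ftln1 bdev, output is a plain file
    # --bs / --count  : 1024 transfers of 1 MiB, i.e. one 1 GiB window per iteration
    # --qd            : keep 2 requests in flight
    # --skip          : input offset in bs units (0 here; the script advances it by --count each pass)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0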
00:27:32.573 [2024-07-12 20:44:26.521227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.573 [2024-07-12 20:44:26.611332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.871  Copying: 417/1024 [MB] (417 MBps) Copying: 842/1024 [MB] (425 MBps) Copying: 1024/1024 [MB] (average 424 MBps) 00:27:37.871 00:27:37.871 20:44:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:37.871 20:44:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:39.786 20:44:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:39.786 Validate MD5 checksum, iteration 2 00:27:39.786 20:44:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=089669d1429e70e271533f328b673fcf 00:27:39.786 20:44:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 089669d1429e70e271533f328b673fcf != \0\8\9\6\6\9\d\1\4\2\9\e\7\0\e\2\7\1\5\3\3\f\3\2\8\b\6\7\3\f\c\f ]] 00:27:39.786 20:44:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:39.786 20:44:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:39.786 20:44:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:39.786 20:44:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:39.786 20:44:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:39.786 20:44:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:39.786 20:44:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:39.786 20:44:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:39.786 20:44:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:39.786 [2024-07-12 20:44:33.879054] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:39.786 [2024-07-12 20:44:33.879298] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98137 ] 00:27:40.044 [2024-07-12 20:44:34.026439] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
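The digest printed for this iteration, 089669d1429e70e271533f328b673fcf, matches the value computed for window 0 before the target was killed, and iteration 2 below reproduces db7f06b652f231a6851154a3bc52cbbc for window 1. That equality is, in effect, the assertion of the dirty-shutdown test: data readable before the crash must read back bit-identical after FTL's recovery. Reduced to its core (a paraphrase, not the literal test code):

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    before=$(md5sum "$file" | cut -f1 -d' ')    # taken before kill -9, e.g. 089669d1429e70e271533f328b673fcf
    # ... dirty shutdown, target restart, FTL recovery, window re-read via ftln1 ...
    after=$(md5sum "$file" | cut -f1 -d' ')
    [[ $after == "$before" ]] || { echo "data lost across dirty shutdown"; exit 1; }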
00:27:40.044 [2024-07-12 20:44:34.051789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.044 [2024-07-12 20:44:34.147889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.153  Copying: 449/1024 [MB] (449 MBps) Copying: 893/1024 [MB] (444 MBps) Copying: 1024/1024 [MB] (average 446 MBps) 00:27:44.153 00:27:44.153 20:44:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:44.153 20:44:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=db7f06b652f231a6851154a3bc52cbbc 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ db7f06b652f231a6851154a3bc52cbbc != \d\b\7\f\0\6\b\6\5\2\f\2\3\1\a\6\8\5\1\1\5\4\a\3\b\c\5\2\c\b\b\c ]] 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 98024 ]] 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 98024 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 98024 ']' 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 98024 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98024 00:27:46.701 killing process with pid 98024 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98024' 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 98024 00:27:46.701 20:44:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 98024 00:27:46.701 [2024-07-12 20:44:40.829518] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:46.701 [2024-07-12 20:44:40.836815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:46.701 [2024-07-12 20:44:40.836858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:46.701 [2024-07-12 20:44:40.836878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:46.701 [2024-07-12 20:44:40.836890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.701 [2024-07-12 20:44:40.836920] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:46.701 [2024-07-12 20:44:40.838174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.701 [2024-07-12 20:44:40.838205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:46.701 [2024-07-12 20:44:40.838220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.234 ms 00:27:46.701 [2024-07-12 20:44:40.838231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.701 [2024-07-12 20:44:40.838532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.701 [2024-07-12 20:44:40.838580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:46.701 [2024-07-12 20:44:40.838594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:27:46.701 [2024-07-12 20:44:40.838605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.701 [2024-07-12 20:44:40.839956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.701 [2024-07-12 20:44:40.839997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:46.701 [2024-07-12 20:44:40.840012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.329 ms 00:27:46.701 [2024-07-12 20:44:40.840036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.701 [2024-07-12 20:44:40.841321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.701 [2024-07-12 20:44:40.841358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:46.701 [2024-07-12 20:44:40.841379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.244 ms 00:27:46.701 [2024-07-12 20:44:40.841389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.701 [2024-07-12 20:44:40.842943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.701 [2024-07-12 20:44:40.843004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:46.701 [2024-07-12 20:44:40.843030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.505 ms 00:27:46.701 [2024-07-12 20:44:40.843057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.701 [2024-07-12 20:44:40.844833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.701 [2024-07-12 20:44:40.845042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:46.701 [2024-07-12 20:44:40.845069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.734 ms 00:27:46.701 [2024-07-12 20:44:40.845081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.701 [2024-07-12 20:44:40.845190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.701 [2024-07-12 20:44:40.845209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:46.701 [2024-07-12 20:44:40.845233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:27:46.701 [2024-07-12 
20:44:40.845265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.701 [2024-07-12 20:44:40.846631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.701 [2024-07-12 20:44:40.846666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:46.701 [2024-07-12 20:44:40.846681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.325 ms 00:27:46.701 [2024-07-12 20:44:40.846691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.701 [2024-07-12 20:44:40.848070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.701 [2024-07-12 20:44:40.848105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:46.701 [2024-07-12 20:44:40.848119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.340 ms 00:27:46.701 [2024-07-12 20:44:40.848129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.961 [2024-07-12 20:44:40.849384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.961 [2024-07-12 20:44:40.849417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:46.961 [2024-07-12 20:44:40.849438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.220 ms 00:27:46.961 [2024-07-12 20:44:40.849449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.961 [2024-07-12 20:44:40.850584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.961 [2024-07-12 20:44:40.850620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:46.961 [2024-07-12 20:44:40.850634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.064 ms 00:27:46.961 [2024-07-12 20:44:40.850644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.961 [2024-07-12 20:44:40.850683] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:46.961 [2024-07-12 20:44:40.850705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:46.961 [2024-07-12 20:44:40.850718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:46.961 [2024-07-12 20:44:40.850729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:46.961 [2024-07-12 20:44:40.850740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 
state: free 00:27:46.961 [2024-07-12 20:44:40.850842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:46.961 [2024-07-12 20:44:40.850925] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:46.961 [2024-07-12 20:44:40.850944] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 81f6ab7e-04f4-4020-ac3e-dfc4e2c58e3c 00:27:46.961 [2024-07-12 20:44:40.850957] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:46.961 [2024-07-12 20:44:40.850967] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:46.961 [2024-07-12 20:44:40.850977] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:46.961 [2024-07-12 20:44:40.851004] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:46.961 [2024-07-12 20:44:40.851015] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:46.961 [2024-07-12 20:44:40.851026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:46.961 [2024-07-12 20:44:40.851047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:46.961 [2024-07-12 20:44:40.851057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:46.961 [2024-07-12 20:44:40.851066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:46.961 [2024-07-12 20:44:40.851077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.961 [2024-07-12 20:44:40.851095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:46.961 [2024-07-12 20:44:40.851107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.396 ms 00:27:46.961 [2024-07-12 20:44:40.851118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.961 [2024-07-12 20:44:40.854046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.961 [2024-07-12 20:44:40.854080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:46.961 [2024-07-12 20:44:40.854095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.883 ms 00:27:46.961 [2024-07-12 20:44:40.854106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.961 [2024-07-12 20:44:40.854278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.961 [2024-07-12 20:44:40.854293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:46.961 [2024-07-12 20:44:40.854311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.148 ms 00:27:46.961 [2024-07-12 20:44:40.854321] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.961 [2024-07-12 20:44:40.865423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.961 [2024-07-12 20:44:40.865471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:46.961 [2024-07-12 20:44:40.865502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.961 [2024-07-12 20:44:40.865513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.961 [2024-07-12 20:44:40.865560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.961 [2024-07-12 20:44:40.865575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:46.961 [2024-07-12 20:44:40.865593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.961 [2024-07-12 20:44:40.865604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.961 [2024-07-12 20:44:40.865743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.961 [2024-07-12 20:44:40.865762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:46.961 [2024-07-12 20:44:40.865775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.962 [2024-07-12 20:44:40.865786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.962 [2024-07-12 20:44:40.865811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.962 [2024-07-12 20:44:40.865823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:46.962 [2024-07-12 20:44:40.865834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.962 [2024-07-12 20:44:40.865844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.962 [2024-07-12 20:44:40.883441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.962 [2024-07-12 20:44:40.883541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:46.962 [2024-07-12 20:44:40.883560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.962 [2024-07-12 20:44:40.883572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.962 [2024-07-12 20:44:40.897093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.962 [2024-07-12 20:44:40.897173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:46.962 [2024-07-12 20:44:40.897192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.962 [2024-07-12 20:44:40.897216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.962 [2024-07-12 20:44:40.897408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.962 [2024-07-12 20:44:40.897436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:46.962 [2024-07-12 20:44:40.897450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.962 [2024-07-12 20:44:40.897462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.962 [2024-07-12 20:44:40.897553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.962 [2024-07-12 20:44:40.897570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:46.962 [2024-07-12 20:44:40.897584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:27:46.962 [2024-07-12 20:44:40.897595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.962 [2024-07-12 20:44:40.897746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.962 [2024-07-12 20:44:40.897763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:46.962 [2024-07-12 20:44:40.897793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.962 [2024-07-12 20:44:40.897804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.962 [2024-07-12 20:44:40.897888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.962 [2024-07-12 20:44:40.897905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:46.962 [2024-07-12 20:44:40.897927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.962 [2024-07-12 20:44:40.897938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.962 [2024-07-12 20:44:40.897999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.962 [2024-07-12 20:44:40.898015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:46.962 [2024-07-12 20:44:40.898027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.962 [2024-07-12 20:44:40.898038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.962 [2024-07-12 20:44:40.898105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.962 [2024-07-12 20:44:40.898127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:46.962 [2024-07-12 20:44:40.898141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.962 [2024-07-12 20:44:40.898158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.962 [2024-07-12 20:44:40.898372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 61.466 ms, result 0 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:47.222 Remove shared memory files 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid97822 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:47.222 ************************************ 00:27:47.222 END TEST ftl_upgrade_shutdown 00:27:47.222 
************************************ 00:27:47.222 00:27:47.222 real 1m21.440s 00:27:47.222 user 1m49.319s 00:27:47.222 sys 0m27.099s 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:47.222 20:44:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:47.222 20:44:41 ftl -- common/autotest_common.sh@1142 -- # return 0 00:27:47.222 20:44:41 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:27:47.222 20:44:41 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:47.222 20:44:41 ftl -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:27:47.222 20:44:41 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.222 20:44:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:47.222 ************************************ 00:27:47.222 START TEST ftl_restore_fast 00:27:47.222 ************************************ 00:27:47.222 20:44:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:47.481 * Looking for test storage... 00:27:47.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:47.481 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.OUYLpWG3wg 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:27:47.482 20:44:41 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=98284 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 98284 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- common/autotest_common.sh@829 -- # '[' -z 98284 ']' 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.482 20:44:41 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:27:47.482 [2024-07-12 20:44:41.588611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:47.482 [2024-07-12 20:44:41.589089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98284 ] 00:27:47.752 [2024-07-12 20:44:41.743155] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
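For reference, the option handling traced above reduces the whole run to the invocation sketched below (reconstructed from the getopts trace; the PCI addresses are this VM's QEMU NVMe devices, and the timeout=240 set here is the same value passed as -t 240 to the slow bdev_ftl_create RPC further down):

  # -f              -> fast_shutdown=1 (bdev_ftl_create is later passed --fast-shutdown)
  # -c 0000:00:10.0 -> nv_cache, the write-buffer cache device
  # 0000:00:11.0    -> base device (the positional argument left after "shift 3")
  ./test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0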
00:27:47.752 [2024-07-12 20:44:41.769502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.752 [2024-07-12 20:44:41.891715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.686 20:44:42 ftl.ftl_restore_fast -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:48.686 20:44:42 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # return 0 00:27:48.686 20:44:42 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:48.687 20:44:42 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:27:48.687 20:44:42 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:48.687 20:44:42 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:27:48.687 20:44:42 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:27:48.687 20:44:42 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:48.944 20:44:42 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:48.944 20:44:42 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:27:48.944 20:44:42 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:48.944 20:44:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:27:48.944 20:44:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:48.944 20:44:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:48.944 20:44:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:48.944 20:44:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:49.202 20:44:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:49.202 { 00:27:49.202 "name": "nvme0n1", 00:27:49.202 "aliases": [ 00:27:49.202 "7ead4b2c-4246-4ed9-bbfa-98d3a44e8908" 00:27:49.202 ], 00:27:49.202 "product_name": "NVMe disk", 00:27:49.202 "block_size": 4096, 00:27:49.202 "num_blocks": 1310720, 00:27:49.202 "uuid": "7ead4b2c-4246-4ed9-bbfa-98d3a44e8908", 00:27:49.202 "assigned_rate_limits": { 00:27:49.202 "rw_ios_per_sec": 0, 00:27:49.202 "rw_mbytes_per_sec": 0, 00:27:49.202 "r_mbytes_per_sec": 0, 00:27:49.202 "w_mbytes_per_sec": 0 00:27:49.202 }, 00:27:49.202 "claimed": true, 00:27:49.202 "claim_type": "read_many_write_one", 00:27:49.202 "zoned": false, 00:27:49.202 "supported_io_types": { 00:27:49.202 "read": true, 00:27:49.202 "write": true, 00:27:49.202 "unmap": true, 00:27:49.202 "flush": true, 00:27:49.202 "reset": true, 00:27:49.202 "nvme_admin": true, 00:27:49.202 "nvme_io": true, 00:27:49.202 "nvme_io_md": false, 00:27:49.202 "write_zeroes": true, 00:27:49.202 "zcopy": false, 00:27:49.202 "get_zone_info": false, 00:27:49.202 "zone_management": false, 00:27:49.202 "zone_append": false, 00:27:49.202 "compare": true, 00:27:49.202 "compare_and_write": false, 00:27:49.202 "abort": true, 00:27:49.202 "seek_hole": false, 00:27:49.202 "seek_data": false, 00:27:49.202 "copy": true, 00:27:49.202 "nvme_iov_md": false 00:27:49.202 }, 00:27:49.202 "driver_specific": { 00:27:49.202 "nvme": [ 00:27:49.202 { 00:27:49.202 "pci_address": "0000:00:11.0", 00:27:49.202 "trid": { 00:27:49.202 "trtype": "PCIe", 00:27:49.202 "traddr": "0000:00:11.0" 00:27:49.202 }, 00:27:49.202 "ctrlr_data": { 00:27:49.202 "cntlid": 0, 00:27:49.202 "vendor_id": "0x1b36", 
00:27:49.202 "model_number": "QEMU NVMe Ctrl", 00:27:49.202 "serial_number": "12341", 00:27:49.202 "firmware_revision": "8.0.0", 00:27:49.202 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:49.202 "oacs": { 00:27:49.202 "security": 0, 00:27:49.202 "format": 1, 00:27:49.202 "firmware": 0, 00:27:49.202 "ns_manage": 1 00:27:49.202 }, 00:27:49.202 "multi_ctrlr": false, 00:27:49.202 "ana_reporting": false 00:27:49.202 }, 00:27:49.202 "vs": { 00:27:49.202 "nvme_version": "1.4" 00:27:49.202 }, 00:27:49.202 "ns_data": { 00:27:49.202 "id": 1, 00:27:49.202 "can_share": false 00:27:49.202 } 00:27:49.202 } 00:27:49.202 ], 00:27:49.202 "mp_policy": "active_passive" 00:27:49.202 } 00:27:49.202 } 00:27:49.202 ]' 00:27:49.202 20:44:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:49.202 20:44:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:49.202 20:44:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:49.202 20:44:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:49.202 20:44:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:49.202 20:44:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:27:49.202 20:44:43 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:27:49.202 20:44:43 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:49.202 20:44:43 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:27:49.202 20:44:43 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:49.202 20:44:43 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:49.461 20:44:43 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=420a62d6-eaed-4f10-87fe-26ab990ebb11 00:27:49.461 20:44:43 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:27:49.461 20:44:43 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 420a62d6-eaed-4f10-87fe-26ab990ebb11 00:27:49.719 20:44:43 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=e66f2dbf-658c-491c-b478-7f4bfdad11b0 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e66f2dbf-658c-491c-b478-7f4bfdad11b0 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=daa95657-c975-4a7d-8d11-a0fd376cf2b5 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 daa95657-c975-4a7d-8d11-a0fd376cf2b5 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=daa95657-c975-4a7d-8d11-a0fd376cf2b5 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size daa95657-c975-4a7d-8d11-a0fd376cf2b5 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local 
bdev_name=daa95657-c975-4a7d-8d11-a0fd376cf2b5 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:50.288 20:44:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b daa95657-c975-4a7d-8d11-a0fd376cf2b5 00:27:50.547 20:44:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:50.547 { 00:27:50.547 "name": "daa95657-c975-4a7d-8d11-a0fd376cf2b5", 00:27:50.547 "aliases": [ 00:27:50.547 "lvs/nvme0n1p0" 00:27:50.547 ], 00:27:50.547 "product_name": "Logical Volume", 00:27:50.547 "block_size": 4096, 00:27:50.547 "num_blocks": 26476544, 00:27:50.547 "uuid": "daa95657-c975-4a7d-8d11-a0fd376cf2b5", 00:27:50.547 "assigned_rate_limits": { 00:27:50.547 "rw_ios_per_sec": 0, 00:27:50.547 "rw_mbytes_per_sec": 0, 00:27:50.547 "r_mbytes_per_sec": 0, 00:27:50.547 "w_mbytes_per_sec": 0 00:27:50.547 }, 00:27:50.547 "claimed": false, 00:27:50.547 "zoned": false, 00:27:50.547 "supported_io_types": { 00:27:50.547 "read": true, 00:27:50.547 "write": true, 00:27:50.547 "unmap": true, 00:27:50.547 "flush": false, 00:27:50.547 "reset": true, 00:27:50.547 "nvme_admin": false, 00:27:50.547 "nvme_io": false, 00:27:50.547 "nvme_io_md": false, 00:27:50.547 "write_zeroes": true, 00:27:50.547 "zcopy": false, 00:27:50.547 "get_zone_info": false, 00:27:50.547 "zone_management": false, 00:27:50.547 "zone_append": false, 00:27:50.547 "compare": false, 00:27:50.547 "compare_and_write": false, 00:27:50.547 "abort": false, 00:27:50.547 "seek_hole": true, 00:27:50.547 "seek_data": true, 00:27:50.547 "copy": false, 00:27:50.547 "nvme_iov_md": false 00:27:50.547 }, 00:27:50.547 "driver_specific": { 00:27:50.547 "lvol": { 00:27:50.547 "lvol_store_uuid": "e66f2dbf-658c-491c-b478-7f4bfdad11b0", 00:27:50.547 "base_bdev": "nvme0n1", 00:27:50.547 "thin_provision": true, 00:27:50.547 "num_allocated_clusters": 0, 00:27:50.547 "snapshot": false, 00:27:50.547 "clone": false, 00:27:50.547 "esnap_clone": false 00:27:50.547 } 00:27:50.547 } 00:27:50.547 } 00:27:50.547 ]' 00:27:50.547 20:44:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:50.807 20:44:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:50.807 20:44:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:50.807 20:44:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:50.807 20:44:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:50.807 20:44:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:50.807 20:44:44 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:27:50.807 20:44:44 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:27:50.807 20:44:44 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:51.066 20:44:45 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:51.066 20:44:45 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:51.066 20:44:45 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size daa95657-c975-4a7d-8d11-a0fd376cf2b5 00:27:51.066 20:44:45 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1378 -- # local bdev_name=daa95657-c975-4a7d-8d11-a0fd376cf2b5 00:27:51.066 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:51.066 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:51.066 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:51.066 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b daa95657-c975-4a7d-8d11-a0fd376cf2b5 00:27:51.326 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:51.326 { 00:27:51.326 "name": "daa95657-c975-4a7d-8d11-a0fd376cf2b5", 00:27:51.326 "aliases": [ 00:27:51.326 "lvs/nvme0n1p0" 00:27:51.326 ], 00:27:51.326 "product_name": "Logical Volume", 00:27:51.326 "block_size": 4096, 00:27:51.326 "num_blocks": 26476544, 00:27:51.326 "uuid": "daa95657-c975-4a7d-8d11-a0fd376cf2b5", 00:27:51.326 "assigned_rate_limits": { 00:27:51.326 "rw_ios_per_sec": 0, 00:27:51.326 "rw_mbytes_per_sec": 0, 00:27:51.326 "r_mbytes_per_sec": 0, 00:27:51.326 "w_mbytes_per_sec": 0 00:27:51.326 }, 00:27:51.326 "claimed": false, 00:27:51.326 "zoned": false, 00:27:51.326 "supported_io_types": { 00:27:51.326 "read": true, 00:27:51.326 "write": true, 00:27:51.326 "unmap": true, 00:27:51.326 "flush": false, 00:27:51.326 "reset": true, 00:27:51.326 "nvme_admin": false, 00:27:51.326 "nvme_io": false, 00:27:51.326 "nvme_io_md": false, 00:27:51.326 "write_zeroes": true, 00:27:51.326 "zcopy": false, 00:27:51.326 "get_zone_info": false, 00:27:51.326 "zone_management": false, 00:27:51.326 "zone_append": false, 00:27:51.326 "compare": false, 00:27:51.326 "compare_and_write": false, 00:27:51.326 "abort": false, 00:27:51.326 "seek_hole": true, 00:27:51.326 "seek_data": true, 00:27:51.326 "copy": false, 00:27:51.326 "nvme_iov_md": false 00:27:51.326 }, 00:27:51.326 "driver_specific": { 00:27:51.326 "lvol": { 00:27:51.326 "lvol_store_uuid": "e66f2dbf-658c-491c-b478-7f4bfdad11b0", 00:27:51.326 "base_bdev": "nvme0n1", 00:27:51.326 "thin_provision": true, 00:27:51.326 "num_allocated_clusters": 0, 00:27:51.326 "snapshot": false, 00:27:51.326 "clone": false, 00:27:51.326 "esnap_clone": false 00:27:51.326 } 00:27:51.326 } 00:27:51.326 } 00:27:51.326 ]' 00:27:51.326 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:51.326 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:51.326 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:51.585 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:51.585 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:51.585 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:51.585 20:44:45 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:27:51.585 20:44:45 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:51.844 20:44:45 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:27:51.844 20:44:45 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size daa95657-c975-4a7d-8d11-a0fd376cf2b5 00:27:51.844 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=daa95657-c975-4a7d-8d11-a0fd376cf2b5 00:27:51.844 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 
-- # local bdev_info 00:27:51.844 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:51.844 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:51.844 20:44:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b daa95657-c975-4a7d-8d11-a0fd376cf2b5 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:52.103 { 00:27:52.103 "name": "daa95657-c975-4a7d-8d11-a0fd376cf2b5", 00:27:52.103 "aliases": [ 00:27:52.103 "lvs/nvme0n1p0" 00:27:52.103 ], 00:27:52.103 "product_name": "Logical Volume", 00:27:52.103 "block_size": 4096, 00:27:52.103 "num_blocks": 26476544, 00:27:52.103 "uuid": "daa95657-c975-4a7d-8d11-a0fd376cf2b5", 00:27:52.103 "assigned_rate_limits": { 00:27:52.103 "rw_ios_per_sec": 0, 00:27:52.103 "rw_mbytes_per_sec": 0, 00:27:52.103 "r_mbytes_per_sec": 0, 00:27:52.103 "w_mbytes_per_sec": 0 00:27:52.103 }, 00:27:52.103 "claimed": false, 00:27:52.103 "zoned": false, 00:27:52.103 "supported_io_types": { 00:27:52.103 "read": true, 00:27:52.103 "write": true, 00:27:52.103 "unmap": true, 00:27:52.103 "flush": false, 00:27:52.103 "reset": true, 00:27:52.103 "nvme_admin": false, 00:27:52.103 "nvme_io": false, 00:27:52.103 "nvme_io_md": false, 00:27:52.103 "write_zeroes": true, 00:27:52.103 "zcopy": false, 00:27:52.103 "get_zone_info": false, 00:27:52.103 "zone_management": false, 00:27:52.103 "zone_append": false, 00:27:52.103 "compare": false, 00:27:52.103 "compare_and_write": false, 00:27:52.103 "abort": false, 00:27:52.103 "seek_hole": true, 00:27:52.103 "seek_data": true, 00:27:52.103 "copy": false, 00:27:52.103 "nvme_iov_md": false 00:27:52.103 }, 00:27:52.103 "driver_specific": { 00:27:52.103 "lvol": { 00:27:52.103 "lvol_store_uuid": "e66f2dbf-658c-491c-b478-7f4bfdad11b0", 00:27:52.103 "base_bdev": "nvme0n1", 00:27:52.103 "thin_provision": true, 00:27:52.103 "num_allocated_clusters": 0, 00:27:52.103 "snapshot": false, 00:27:52.103 "clone": false, 00:27:52.103 "esnap_clone": false 00:27:52.103 } 00:27:52.103 } 00:27:52.103 } 00:27:52.103 ]' 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d daa95657-c975-4a7d-8d11-a0fd376cf2b5 --l2p_dram_limit 10' 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:27:52.103 20:44:46 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d daa95657-c975-4a7d-8d11-a0fd376cf2b5 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:27:52.363 [2024-07-12 20:44:46.402101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.363 [2024-07-12 20:44:46.402187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:52.363 [2024-07-12 20:44:46.402213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:52.363 [2024-07-12 20:44:46.402228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.363 [2024-07-12 20:44:46.402362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.363 [2024-07-12 20:44:46.402408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:52.363 [2024-07-12 20:44:46.402429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:27:52.363 [2024-07-12 20:44:46.402447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.363 [2024-07-12 20:44:46.402496] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:52.363 [2024-07-12 20:44:46.402988] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:52.363 [2024-07-12 20:44:46.403024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.363 [2024-07-12 20:44:46.403041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:52.363 [2024-07-12 20:44:46.403054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:27:52.363 [2024-07-12 20:44:46.403078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.363 [2024-07-12 20:44:46.403373] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dbe0db69-32e5-416f-a8d2-bf12c6095658 00:27:52.363 [2024-07-12 20:44:46.406008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.363 [2024-07-12 20:44:46.406052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:52.363 [2024-07-12 20:44:46.406074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:52.363 [2024-07-12 20:44:46.406087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.363 [2024-07-12 20:44:46.420528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.363 [2024-07-12 20:44:46.420607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:52.363 [2024-07-12 20:44:46.420636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.330 ms 00:27:52.363 [2024-07-12 20:44:46.420649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.363 [2024-07-12 20:44:46.420849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.363 [2024-07-12 20:44:46.420869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:52.363 [2024-07-12 20:44:46.420886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:27:52.363 [2024-07-12 20:44:46.420898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.363 [2024-07-12 20:44:46.421018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.363 [2024-07-12 20:44:46.421037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register 
IO device 00:27:52.363 [2024-07-12 20:44:46.421065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:27:52.363 [2024-07-12 20:44:46.421087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.363 [2024-07-12 20:44:46.421128] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:52.363 [2024-07-12 20:44:46.424368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.363 [2024-07-12 20:44:46.424411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:52.363 [2024-07-12 20:44:46.424429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms 00:27:52.363 [2024-07-12 20:44:46.424453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.363 [2024-07-12 20:44:46.424512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.363 [2024-07-12 20:44:46.424533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:52.363 [2024-07-12 20:44:46.424547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:52.363 [2024-07-12 20:44:46.424565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.363 [2024-07-12 20:44:46.424593] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:52.363 [2024-07-12 20:44:46.424790] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:52.363 [2024-07-12 20:44:46.424809] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:52.363 [2024-07-12 20:44:46.424828] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:52.363 [2024-07-12 20:44:46.424852] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:52.363 [2024-07-12 20:44:46.424881] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:52.363 [2024-07-12 20:44:46.424896] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:52.363 [2024-07-12 20:44:46.424910] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:52.363 [2024-07-12 20:44:46.424921] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:52.363 [2024-07-12 20:44:46.424934] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:52.364 [2024-07-12 20:44:46.424947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.364 [2024-07-12 20:44:46.424969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:52.364 [2024-07-12 20:44:46.424981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:27:52.364 [2024-07-12 20:44:46.424995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.364 [2024-07-12 20:44:46.425081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.364 [2024-07-12 20:44:46.425114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:52.364 [2024-07-12 20:44:46.425142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:52.364 [2024-07-12 20:44:46.425159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.364 
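Stripping the xtrace prefixes, the bdev stack this run has assembled before ftl0 starts up reduces to the RPC sequence below (a condensed sketch of the commands traced above; the lvolstore and lvol UUIDs are specific to this run):

  # base device: a 103424 MiB thin lvol on the QEMU NVMe namespace at 0000:00:11.0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  ./scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  ./scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e66f2dbf-658c-491c-b478-7f4bfdad11b0
  # NV cache: a 5171 MiB split of the second namespace at 0000:00:10.0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  ./scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # the FTL device under test, created with the fast-shutdown path enabled
  ./scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d daa95657-c975-4a7d-8d11-a0fd376cf2b5 \
      --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown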
[2024-07-12 20:44:46.425369] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:52.364 [2024-07-12 20:44:46.425408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:52.364 [2024-07-12 20:44:46.425424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:52.364 [2024-07-12 20:44:46.425440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:52.364 [2024-07-12 20:44:46.425476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:52.364 [2024-07-12 20:44:46.425491] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:52.364 [2024-07-12 20:44:46.425502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:52.364 [2024-07-12 20:44:46.425519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:52.364 [2024-07-12 20:44:46.425547] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:52.364 [2024-07-12 20:44:46.425561] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:52.364 [2024-07-12 20:44:46.425573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:52.364 [2024-07-12 20:44:46.425588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:52.364 [2024-07-12 20:44:46.425599] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:52.364 [2024-07-12 20:44:46.425655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:52.364 [2024-07-12 20:44:46.425666] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:52.364 [2024-07-12 20:44:46.425680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:52.364 [2024-07-12 20:44:46.425705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:52.364 [2024-07-12 20:44:46.425718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:52.364 [2024-07-12 20:44:46.425728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:52.364 [2024-07-12 20:44:46.425741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:52.364 [2024-07-12 20:44:46.425754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:52.364 [2024-07-12 20:44:46.425777] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:52.364 [2024-07-12 20:44:46.425788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:52.364 [2024-07-12 20:44:46.425804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:52.364 [2024-07-12 20:44:46.425830] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:52.364 [2024-07-12 20:44:46.425860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:52.364 [2024-07-12 20:44:46.425872] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:52.364 [2024-07-12 20:44:46.425900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:52.364 [2024-07-12 20:44:46.425911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:52.364 [2024-07-12 20:44:46.425928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:52.364 [2024-07-12 20:44:46.425942] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:52.364 [2024-07-12 20:44:46.425955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:27:52.364 [2024-07-12 20:44:46.425966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:52.364 [2024-07-12 20:44:46.425987] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:52.364 [2024-07-12 20:44:46.425997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:52.364 [2024-07-12 20:44:46.426011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:52.364 [2024-07-12 20:44:46.426022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:52.364 [2024-07-12 20:44:46.426035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:52.364 [2024-07-12 20:44:46.426046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:52.364 [2024-07-12 20:44:46.426059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:52.364 [2024-07-12 20:44:46.426070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:52.364 [2024-07-12 20:44:46.426083] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:52.364 [2024-07-12 20:44:46.426093] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:52.364 [2024-07-12 20:44:46.426106] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:52.364 [2024-07-12 20:44:46.426117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:52.364 [2024-07-12 20:44:46.426134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:52.364 [2024-07-12 20:44:46.426145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:52.364 [2024-07-12 20:44:46.426164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:52.364 [2024-07-12 20:44:46.426174] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:52.364 [2024-07-12 20:44:46.426189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:52.364 [2024-07-12 20:44:46.426200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:52.364 [2024-07-12 20:44:46.426213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:52.364 [2024-07-12 20:44:46.426224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:52.364 [2024-07-12 20:44:46.426243] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:52.364 [2024-07-12 20:44:46.426257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:52.364 [2024-07-12 20:44:46.426273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:52.364 [2024-07-12 20:44:46.426284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:52.364 [2024-07-12 20:44:46.426309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:52.364 [2024-07-12 20:44:46.426335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:52.364 [2024-07-12 20:44:46.426349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x5920 blk_sz:0x800 00:27:52.364 [2024-07-12 20:44:46.426376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:52.364 [2024-07-12 20:44:46.426395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:52.364 [2024-07-12 20:44:46.426407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:52.364 [2024-07-12 20:44:46.426420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:52.364 [2024-07-12 20:44:46.426432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:52.364 [2024-07-12 20:44:46.426445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:52.364 [2024-07-12 20:44:46.426457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:52.364 [2024-07-12 20:44:46.426470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:52.364 [2024-07-12 20:44:46.426482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:52.364 [2024-07-12 20:44:46.426496] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:52.364 [2024-07-12 20:44:46.426509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:52.364 [2024-07-12 20:44:46.426525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:52.364 [2024-07-12 20:44:46.426536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:52.364 [2024-07-12 20:44:46.426550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:52.364 [2024-07-12 20:44:46.426561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:52.364 [2024-07-12 20:44:46.426578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.364 [2024-07-12 20:44:46.426590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:52.364 [2024-07-12 20:44:46.426607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:27:52.364 [2024-07-12 20:44:46.426619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.364 [2024-07-12 20:44:46.426698] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
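The layout dump is internally consistent; a quick shell-arithmetic check against the numbers printed by ftl_layout.c above:

  # 20971520 L2P entries * 4-byte addresses = 80 MiB, the size of the "l2p" region above
  echo $(( 20971520 * 4 / 1024 / 1024 ))            # 80 (MiB)
  # at the 4096-byte block size each entry maps one block, i.e. roughly 80 GiB of logical space
  echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))  # 80 (GiB)
  # --l2p_dram_limit 10 only caps how much of that table may stay resident in DRAM at once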
00:27:52.364 [2024-07-12 20:44:46.426716] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:55.651 [2024-07-12 20:44:49.510865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.651 [2024-07-12 20:44:49.510982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:55.651 [2024-07-12 20:44:49.511012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3084.150 ms 00:27:55.651 [2024-07-12 20:44:49.511025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.651 [2024-07-12 20:44:49.533089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.651 [2024-07-12 20:44:49.533158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:55.651 [2024-07-12 20:44:49.533183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.897 ms 00:27:55.651 [2024-07-12 20:44:49.533201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.651 [2024-07-12 20:44:49.533398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.651 [2024-07-12 20:44:49.533416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:55.651 [2024-07-12 20:44:49.533431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:27:55.651 [2024-07-12 20:44:49.533442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.651 [2024-07-12 20:44:49.551107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.651 [2024-07-12 20:44:49.551163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:55.651 [2024-07-12 20:44:49.551184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.575 ms 00:27:55.651 [2024-07-12 20:44:49.551200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.651 [2024-07-12 20:44:49.551299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.651 [2024-07-12 20:44:49.551319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:55.651 [2024-07-12 20:44:49.551343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:55.651 [2024-07-12 20:44:49.551383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.651 [2024-07-12 20:44:49.552295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.552341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:55.652 [2024-07-12 20:44:49.552378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:27:55.652 [2024-07-12 20:44:49.552390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.552566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.552588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:55.652 [2024-07-12 20:44:49.552605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:27:55.652 [2024-07-12 20:44:49.552616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.564951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.564990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:55.652 [2024-07-12 
20:44:49.565030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.303 ms 00:27:55.652 [2024-07-12 20:44:49.565042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.575419] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:55.652 [2024-07-12 20:44:49.580857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.580894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:55.652 [2024-07-12 20:44:49.580911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.699 ms 00:27:55.652 [2024-07-12 20:44:49.580926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.660790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.660896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:55.652 [2024-07-12 20:44:49.660929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.825 ms 00:27:55.652 [2024-07-12 20:44:49.660970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.661256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.661303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:55.652 [2024-07-12 20:44:49.661328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:27:55.652 [2024-07-12 20:44:49.661343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.665687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.665753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:55.652 [2024-07-12 20:44:49.665775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.297 ms 00:27:55.652 [2024-07-12 20:44:49.665789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.669100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.669144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:55.652 [2024-07-12 20:44:49.669171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:27:55.652 [2024-07-12 20:44:49.669184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.669639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.669670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:55.652 [2024-07-12 20:44:49.669704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:27:55.652 [2024-07-12 20:44:49.669720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.713676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.713775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:55.652 [2024-07-12 20:44:49.713807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.926 ms 00:27:55.652 [2024-07-12 20:44:49.713822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.719716] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.719765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:55.652 [2024-07-12 20:44:49.719784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.845 ms 00:27:55.652 [2024-07-12 20:44:49.719800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.723672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.723717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:55.652 [2024-07-12 20:44:49.723735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.796 ms 00:27:55.652 [2024-07-12 20:44:49.723749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.727996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.728051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:55.652 [2024-07-12 20:44:49.728069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.202 ms 00:27:55.652 [2024-07-12 20:44:49.728085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.728159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.728183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:55.652 [2024-07-12 20:44:49.728196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:55.652 [2024-07-12 20:44:49.728210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.728313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.652 [2024-07-12 20:44:49.728337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:55.652 [2024-07-12 20:44:49.728375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:27:55.652 [2024-07-12 20:44:49.728393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.652 [2024-07-12 20:44:49.730188] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3327.418 ms, result 0 00:27:55.652 { 00:27:55.652 "name": "ftl0", 00:27:55.652 "uuid": "dbe0db69-32e5-416f-a8d2-bf12c6095658" 00:27:55.652 } 00:27:55.652 20:44:49 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:27:55.652 20:44:49 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:56.220 20:44:50 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:27:56.220 20:44:50 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:56.220 [2024-07-12 20:44:50.360035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.220 [2024-07-12 20:44:50.360133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:56.220 [2024-07-12 20:44:50.360178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:56.220 [2024-07-12 20:44:50.360200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.220 [2024-07-12 20:44:50.360285] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
00:27:56.220 [2024-07-12 20:44:50.361563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.220 [2024-07-12 20:44:50.361609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:56.220 [2024-07-12 20:44:50.361656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:27:56.220 [2024-07-12 20:44:50.361684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.220 [2024-07-12 20:44:50.362012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.220 [2024-07-12 20:44:50.362036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:56.220 [2024-07-12 20:44:50.362053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:27:56.220 [2024-07-12 20:44:50.362067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.220 [2024-07-12 20:44:50.365077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.220 [2024-07-12 20:44:50.365112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:56.220 [2024-07-12 20:44:50.365127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.989 ms 00:27:56.220 [2024-07-12 20:44:50.365140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.481 [2024-07-12 20:44:50.371489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.481 [2024-07-12 20:44:50.371547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:56.481 [2024-07-12 20:44:50.371563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.325 ms 00:27:56.481 [2024-07-12 20:44:50.371582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.481 [2024-07-12 20:44:50.373088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.481 [2024-07-12 20:44:50.373134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:56.481 [2024-07-12 20:44:50.373149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.431 ms 00:27:56.481 [2024-07-12 20:44:50.373162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.481 [2024-07-12 20:44:50.378521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.481 [2024-07-12 20:44:50.378576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:56.481 [2024-07-12 20:44:50.378593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.320 ms 00:27:56.481 [2024-07-12 20:44:50.378607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.481 [2024-07-12 20:44:50.378748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.481 [2024-07-12 20:44:50.378774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:56.481 [2024-07-12 20:44:50.378787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:27:56.481 [2024-07-12 20:44:50.378800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.481 [2024-07-12 20:44:50.380979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.481 [2024-07-12 20:44:50.381022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:56.481 [2024-07-12 20:44:50.381036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.144 ms 00:27:56.481 [2024-07-12 20:44:50.381049] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.481 [2024-07-12 20:44:50.382898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.481 [2024-07-12 20:44:50.382945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:56.481 [2024-07-12 20:44:50.382965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.810 ms 00:27:56.481 [2024-07-12 20:44:50.382977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.481 [2024-07-12 20:44:50.384233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.481 [2024-07-12 20:44:50.384316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:56.481 [2024-07-12 20:44:50.384338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:27:56.481 [2024-07-12 20:44:50.384351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.481 [2024-07-12 20:44:50.385807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.481 [2024-07-12 20:44:50.385845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:56.481 [2024-07-12 20:44:50.385859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:27:56.481 [2024-07-12 20:44:50.385873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.481 [2024-07-12 20:44:50.385911] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:56.481 [2024-07-12 20:44:50.385939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.385953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.385967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.385978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 
20:44:50.386139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:27:56.481 [2024-07-12 20:44:50.386513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:27:56.481 [2024-07-12 20:44:50.386839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.386850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.386863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.386873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.386888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.386900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.386913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.386923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.386936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.386958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.386974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.386985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:56.482 [2024-07-12 20:44:50.387315] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:56.482 [2024-07-12 20:44:50.387328] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbe0db69-32e5-416f-a8d2-bf12c6095658 00:27:56.482 [2024-07-12 20:44:50.387343] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:56.482 [2024-07-12 20:44:50.387354] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:56.482 [2024-07-12 20:44:50.387368] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:56.482 [2024-07-12 20:44:50.387379] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:56.482 [2024-07-12 20:44:50.387395] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:56.482 [2024-07-12 20:44:50.387406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:56.482 [2024-07-12 20:44:50.387419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:56.482 [2024-07-12 20:44:50.387428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:56.482 [2024-07-12 20:44:50.387440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:56.482 [2024-07-12 20:44:50.387452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.482 [2024-07-12 20:44:50.387464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:56.482 [2024-07-12 20:44:50.387476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:27:56.482 [2024-07-12 20:44:50.387489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.390423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.482 [2024-07-12 20:44:50.390467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:27:56.482 [2024-07-12 20:44:50.390485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:27:56.482 [2024-07-12 20:44:50.390525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.390714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.482 [2024-07-12 20:44:50.390733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:56.482 [2024-07-12 20:44:50.390745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:27:56.482 [2024-07-12 20:44:50.390759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.401561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.482 [2024-07-12 20:44:50.401612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:56.482 [2024-07-12 20:44:50.401631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.482 [2024-07-12 20:44:50.401644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.401726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.482 [2024-07-12 20:44:50.401745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:56.482 [2024-07-12 20:44:50.401757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.482 [2024-07-12 20:44:50.401770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.401873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.482 [2024-07-12 20:44:50.401901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:56.482 [2024-07-12 20:44:50.401914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.482 [2024-07-12 20:44:50.401931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.401966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.482 [2024-07-12 20:44:50.401985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:56.482 [2024-07-12 20:44:50.402014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.482 [2024-07-12 20:44:50.402027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.421648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.482 [2024-07-12 20:44:50.421760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:56.482 [2024-07-12 20:44:50.421785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.482 [2024-07-12 20:44:50.421799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.435336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.482 [2024-07-12 20:44:50.435436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:56.482 [2024-07-12 20:44:50.435456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.482 [2024-07-12 20:44:50.435470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.435664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.482 [2024-07-12 20:44:50.435693] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:56.482 [2024-07-12 20:44:50.435708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.482 [2024-07-12 20:44:50.435723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.435814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.482 [2024-07-12 20:44:50.435882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:56.482 [2024-07-12 20:44:50.435911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.482 [2024-07-12 20:44:50.435924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.436029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.482 [2024-07-12 20:44:50.436061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:56.482 [2024-07-12 20:44:50.436076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.482 [2024-07-12 20:44:50.436090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.436151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.482 [2024-07-12 20:44:50.436182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:56.482 [2024-07-12 20:44:50.436196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.482 [2024-07-12 20:44:50.436210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.436280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.482 [2024-07-12 20:44:50.436304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:56.482 [2024-07-12 20:44:50.436316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.482 [2024-07-12 20:44:50.436330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.436398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.482 [2024-07-12 20:44:50.436418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:56.482 [2024-07-12 20:44:50.436441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.482 [2024-07-12 20:44:50.436454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.482 [2024-07-12 20:44:50.436647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 76.578 ms, result 0 00:27:56.482 true 00:27:56.482 20:44:50 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 98284 00:27:56.482 20:44:50 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 98284 ']' 00:27:56.482 20:44:50 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 98284 00:27:56.483 20:44:50 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # uname 00:27:56.483 20:44:50 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:56.483 20:44:50 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98284 00:27:56.483 20:44:50 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:56.483 20:44:50 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:27:56.483 killing process with pid 98284 00:27:56.483 20:44:50 ftl.ftl_restore_fast -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98284' 00:27:56.483 20:44:50 ftl.ftl_restore_fast -- common/autotest_common.sh@967 -- # kill 98284 00:27:56.483 20:44:50 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # wait 98284 00:27:59.767 20:44:53 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:05.035 262144+0 records in 00:28:05.035 262144+0 records out 00:28:05.035 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.12371 s, 210 MB/s 00:28:05.035 20:44:58 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:06.939 20:45:01 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:06.939 [2024-07-12 20:45:01.086205] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:28:06.939 [2024-07-12 20:45:01.086397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98511 ] 00:28:07.198 [2024-07-12 20:45:01.232678] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:07.198 [2024-07-12 20:45:01.256433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.457 [2024-07-12 20:45:01.356559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.457 [2024-07-12 20:45:01.522859] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:07.457 [2024-07-12 20:45:01.522950] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:07.717 [2024-07-12 20:45:01.684782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.717 [2024-07-12 20:45:01.684841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:07.717 [2024-07-12 20:45:01.684868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:07.717 [2024-07-12 20:45:01.684879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.717 [2024-07-12 20:45:01.684961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.717 [2024-07-12 20:45:01.684980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:07.717 [2024-07-12 20:45:01.685003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:07.717 [2024-07-12 20:45:01.685020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.717 [2024-07-12 20:45:01.685062] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:07.717 [2024-07-12 20:45:01.685316] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:07.717 [2024-07-12 20:45:01.685340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.718 [2024-07-12 20:45:01.685352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:07.718 [2024-07-12 20:45:01.685367] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:28:07.718 [2024-07-12 20:45:01.685378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.718 [2024-07-12 20:45:01.687731] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:07.718 [2024-07-12 20:45:01.691147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.718 [2024-07-12 20:45:01.691197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:07.718 [2024-07-12 20:45:01.691215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.417 ms 00:28:07.718 [2024-07-12 20:45:01.691235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.718 [2024-07-12 20:45:01.691314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.718 [2024-07-12 20:45:01.691332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:07.718 [2024-07-12 20:45:01.691344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:07.718 [2024-07-12 20:45:01.691357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.718 [2024-07-12 20:45:01.702906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.718 [2024-07-12 20:45:01.702949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:07.718 [2024-07-12 20:45:01.702965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.486 ms 00:28:07.718 [2024-07-12 20:45:01.702975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.718 [2024-07-12 20:45:01.703068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.718 [2024-07-12 20:45:01.703085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:07.718 [2024-07-12 20:45:01.703101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:07.718 [2024-07-12 20:45:01.703111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.718 [2024-07-12 20:45:01.703194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.718 [2024-07-12 20:45:01.703222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:07.718 [2024-07-12 20:45:01.703234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:07.718 [2024-07-12 20:45:01.703265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.718 [2024-07-12 20:45:01.703302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:07.718 [2024-07-12 20:45:01.705835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.718 [2024-07-12 20:45:01.705866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:07.718 [2024-07-12 20:45:01.705885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:28:07.718 [2024-07-12 20:45:01.705905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.718 [2024-07-12 20:45:01.705962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.718 [2024-07-12 20:45:01.705978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:07.718 [2024-07-12 20:45:01.705990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:07.718 [2024-07-12 20:45:01.706000] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.718 [2024-07-12 20:45:01.706026] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:07.718 [2024-07-12 20:45:01.706065] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:07.718 [2024-07-12 20:45:01.706126] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:07.718 [2024-07-12 20:45:01.706152] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:07.718 [2024-07-12 20:45:01.706259] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:07.718 [2024-07-12 20:45:01.706286] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:07.718 [2024-07-12 20:45:01.706326] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:07.718 [2024-07-12 20:45:01.706349] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:07.718 [2024-07-12 20:45:01.706361] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:07.718 [2024-07-12 20:45:01.706383] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:07.718 [2024-07-12 20:45:01.706394] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:07.718 [2024-07-12 20:45:01.706404] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:07.718 [2024-07-12 20:45:01.706414] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:07.718 [2024-07-12 20:45:01.706425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.718 [2024-07-12 20:45:01.706440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:07.718 [2024-07-12 20:45:01.706452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:28:07.718 [2024-07-12 20:45:01.706462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.718 [2024-07-12 20:45:01.706545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.718 [2024-07-12 20:45:01.706558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:07.718 [2024-07-12 20:45:01.706569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:07.718 [2024-07-12 20:45:01.706579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.718 [2024-07-12 20:45:01.706677] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:07.718 [2024-07-12 20:45:01.706693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:07.718 [2024-07-12 20:45:01.706709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:07.718 [2024-07-12 20:45:01.706719] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.718 [2024-07-12 20:45:01.706730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:07.718 [2024-07-12 20:45:01.706739] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:07.718 [2024-07-12 20:45:01.706750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 
MiB 00:28:07.718 [2024-07-12 20:45:01.706759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:07.718 [2024-07-12 20:45:01.706769] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:07.718 [2024-07-12 20:45:01.706779] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:07.718 [2024-07-12 20:45:01.706788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:07.718 [2024-07-12 20:45:01.706797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:07.718 [2024-07-12 20:45:01.706809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:07.718 [2024-07-12 20:45:01.706821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:07.718 [2024-07-12 20:45:01.706831] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:07.718 [2024-07-12 20:45:01.706854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.718 [2024-07-12 20:45:01.706864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:07.718 [2024-07-12 20:45:01.706873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:07.718 [2024-07-12 20:45:01.706882] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.718 [2024-07-12 20:45:01.706892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:07.718 [2024-07-12 20:45:01.706901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:07.718 [2024-07-12 20:45:01.706911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.718 [2024-07-12 20:45:01.706920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:07.718 [2024-07-12 20:45:01.706930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:07.718 [2024-07-12 20:45:01.706939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.718 [2024-07-12 20:45:01.706948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:07.718 [2024-07-12 20:45:01.706957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:07.718 [2024-07-12 20:45:01.706965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.718 [2024-07-12 20:45:01.706980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:07.718 [2024-07-12 20:45:01.706992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:07.718 [2024-07-12 20:45:01.707001] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.718 [2024-07-12 20:45:01.707010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:07.718 [2024-07-12 20:45:01.707020] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:07.718 [2024-07-12 20:45:01.707029] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:07.718 [2024-07-12 20:45:01.707038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:07.718 [2024-07-12 20:45:01.707049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:07.718 [2024-07-12 20:45:01.707057] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:07.719 [2024-07-12 20:45:01.707067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:07.719 [2024-07-12 20:45:01.707076] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:07.719 [2024-07-12 20:45:01.707085] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.719 [2024-07-12 20:45:01.707094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:07.719 [2024-07-12 20:45:01.707103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:07.719 [2024-07-12 20:45:01.707112] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.719 [2024-07-12 20:45:01.707122] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:07.719 [2024-07-12 20:45:01.707135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:07.719 [2024-07-12 20:45:01.707147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:07.719 [2024-07-12 20:45:01.707158] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.719 [2024-07-12 20:45:01.707168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:07.719 [2024-07-12 20:45:01.707178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:07.719 [2024-07-12 20:45:01.707188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:07.719 [2024-07-12 20:45:01.707204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:07.719 [2024-07-12 20:45:01.707213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:07.719 [2024-07-12 20:45:01.707223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:07.719 [2024-07-12 20:45:01.707234] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:07.719 [2024-07-12 20:45:01.707263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.719 [2024-07-12 20:45:01.707275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:07.719 [2024-07-12 20:45:01.707286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:07.719 [2024-07-12 20:45:01.707297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:07.719 [2024-07-12 20:45:01.707307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:07.719 [2024-07-12 20:45:01.707317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:07.719 [2024-07-12 20:45:01.707331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:07.719 [2024-07-12 20:45:01.707342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:07.719 [2024-07-12 20:45:01.707353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:07.719 [2024-07-12 20:45:01.707364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:07.719 [2024-07-12 20:45:01.707374] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:07.719 [2024-07-12 20:45:01.707384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:07.719 [2024-07-12 20:45:01.707395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:07.719 [2024-07-12 20:45:01.707405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:07.719 [2024-07-12 20:45:01.707416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:07.719 [2024-07-12 20:45:01.707425] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:07.719 [2024-07-12 20:45:01.707445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.719 [2024-07-12 20:45:01.707457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:07.719 [2024-07-12 20:45:01.707467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:07.719 [2024-07-12 20:45:01.707477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:07.719 [2024-07-12 20:45:01.707487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:07.719 [2024-07-12 20:45:01.707499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.719 [2024-07-12 20:45:01.707543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:07.719 [2024-07-12 20:45:01.707558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:28:07.719 [2024-07-12 20:45:01.707577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.719 [2024-07-12 20:45:01.738653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.719 [2024-07-12 20:45:01.738733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:07.719 [2024-07-12 20:45:01.738759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.985 ms 00:28:07.719 [2024-07-12 20:45:01.738782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.719 [2024-07-12 20:45:01.738945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.719 [2024-07-12 20:45:01.738966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:07.719 [2024-07-12 20:45:01.739003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:28:07.719 [2024-07-12 20:45:01.739019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.719 [2024-07-12 20:45:01.755067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.719 [2024-07-12 20:45:01.755128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:07.719 [2024-07-12 20:45:01.755144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.936 ms 00:28:07.719 [2024-07-12 20:45:01.755156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.719 [2024-07-12 20:45:01.755208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.719 [2024-07-12 20:45:01.755230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:07.719 [2024-07-12 20:45:01.755268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:07.719 [2024-07-12 20:45:01.755279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.719 [2024-07-12 20:45:01.756087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.719 [2024-07-12 20:45:01.756130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:07.719 [2024-07-12 20:45:01.756145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:28:07.719 [2024-07-12 20:45:01.756156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.719 [2024-07-12 20:45:01.756351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.719 [2024-07-12 20:45:01.756387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:07.719 [2024-07-12 20:45:01.756404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:28:07.719 [2024-07-12 20:45:01.756414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.719 [2024-07-12 20:45:01.765982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.719 [2024-07-12 20:45:01.766016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:07.719 [2024-07-12 20:45:01.766043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.541 ms 00:28:07.719 [2024-07-12 20:45:01.766054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.719 [2024-07-12 20:45:01.769688] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:07.719 [2024-07-12 20:45:01.769738] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:07.719 [2024-07-12 20:45:01.769759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.719 [2024-07-12 20:45:01.769772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:07.719 [2024-07-12 20:45:01.769784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.586 ms 00:28:07.719 [2024-07-12 20:45:01.769794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.719 [2024-07-12 20:45:01.784084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.719 [2024-07-12 20:45:01.784141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:07.719 [2024-07-12 20:45:01.784157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.246 ms 00:28:07.719 [2024-07-12 20:45:01.784195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.720 [2024-07-12 20:45:01.786065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.720 [2024-07-12 20:45:01.786099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:07.720 [2024-07-12 20:45:01.786116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.841 ms 00:28:07.720 [2024-07-12 20:45:01.786125] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.720 [2024-07-12 20:45:01.787643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.720 [2024-07-12 20:45:01.787676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:07.720 [2024-07-12 20:45:01.787691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:28:07.720 [2024-07-12 20:45:01.787701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.720 [2024-07-12 20:45:01.788058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.720 [2024-07-12 20:45:01.788100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:07.720 [2024-07-12 20:45:01.788115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:28:07.720 [2024-07-12 20:45:01.788130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.720 [2024-07-12 20:45:01.815468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.720 [2024-07-12 20:45:01.815579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:07.720 [2024-07-12 20:45:01.815601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.311 ms 00:28:07.720 [2024-07-12 20:45:01.815613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.720 [2024-07-12 20:45:01.823757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:07.720 [2024-07-12 20:45:01.829036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.720 [2024-07-12 20:45:01.829080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:07.720 [2024-07-12 20:45:01.829107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.359 ms 00:28:07.720 [2024-07-12 20:45:01.829118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.720 [2024-07-12 20:45:01.829260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.720 [2024-07-12 20:45:01.829288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:07.720 [2024-07-12 20:45:01.829302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:07.720 [2024-07-12 20:45:01.829318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.720 [2024-07-12 20:45:01.829464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.720 [2024-07-12 20:45:01.829488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:07.720 [2024-07-12 20:45:01.829501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:07.720 [2024-07-12 20:45:01.829511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.720 [2024-07-12 20:45:01.829544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.720 [2024-07-12 20:45:01.829558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:07.720 [2024-07-12 20:45:01.829569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:07.720 [2024-07-12 20:45:01.829591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.720 [2024-07-12 20:45:01.829640] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:07.720 [2024-07-12 20:45:01.829656] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.720 [2024-07-12 20:45:01.829680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:07.720 [2024-07-12 20:45:01.829707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:07.720 [2024-07-12 20:45:01.829718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.720 [2024-07-12 20:45:01.834829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.720 [2024-07-12 20:45:01.834866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:07.720 [2024-07-12 20:45:01.834881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.082 ms 00:28:07.720 [2024-07-12 20:45:01.834891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.720 [2024-07-12 20:45:01.834968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.720 [2024-07-12 20:45:01.835015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:07.720 [2024-07-12 20:45:01.835033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:07.720 [2024-07-12 20:45:01.835043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.720 [2024-07-12 20:45:01.836757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.364 ms, result 0 00:28:53.061  Copying: 24/1024 [MB] (24 MBps) Copying: 47/1024 [MB] (23 MBps) Copying: 72/1024 [MB] (25 MBps) Copying: 97/1024 [MB] (24 MBps) Copying: 122/1024 [MB] (24 MBps) Copying: 146/1024 [MB] (24 MBps) Copying: 173/1024 [MB] (26 MBps) Copying: 196/1024 [MB] (23 MBps) Copying: 220/1024 [MB] (23 MBps) Copying: 243/1024 [MB] (23 MBps) Copying: 267/1024 [MB] (24 MBps) Copying: 291/1024 [MB] (24 MBps) Copying: 316/1024 [MB] (24 MBps) Copying: 340/1024 [MB] (23 MBps) Copying: 364/1024 [MB] (24 MBps) Copying: 388/1024 [MB] (24 MBps) Copying: 413/1024 [MB] (24 MBps) Copying: 436/1024 [MB] (23 MBps) Copying: 457/1024 [MB] (21 MBps) Copying: 480/1024 [MB] (22 MBps) Copying: 503/1024 [MB] (22 MBps) Copying: 524/1024 [MB] (21 MBps) Copying: 546/1024 [MB] (22 MBps) Copying: 567/1024 [MB] (21 MBps) Copying: 589/1024 [MB] (21 MBps) Copying: 610/1024 [MB] (20 MBps) Copying: 631/1024 [MB] (21 MBps) Copying: 652/1024 [MB] (20 MBps) Copying: 673/1024 [MB] (20 MBps) Copying: 694/1024 [MB] (21 MBps) Copying: 714/1024 [MB] (20 MBps) Copying: 734/1024 [MB] (20 MBps) Copying: 755/1024 [MB] (20 MBps) Copying: 776/1024 [MB] (20 MBps) Copying: 797/1024 [MB] (20 MBps) Copying: 818/1024 [MB] (21 MBps) Copying: 840/1024 [MB] (21 MBps) Copying: 862/1024 [MB] (22 MBps) Copying: 885/1024 [MB] (22 MBps) Copying: 906/1024 [MB] (21 MBps) Copying: 929/1024 [MB] (22 MBps) Copying: 951/1024 [MB] (22 MBps) Copying: 973/1024 [MB] (22 MBps) Copying: 997/1024 [MB] (23 MBps) Copying: 1019/1024 [MB] (21 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-12 20:45:47.063984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.061 [2024-07-12 20:45:47.064077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:53.061 [2024-07-12 20:45:47.064112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:53.061 [2024-07-12 20:45:47.064125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.061 [2024-07-12 20:45:47.064161] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:28:53.061 [2024-07-12 20:45:47.065450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.061 [2024-07-12 20:45:47.065484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:53.061 [2024-07-12 20:45:47.065499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:28:53.061 [2024-07-12 20:45:47.065525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.061 [2024-07-12 20:45:47.067225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.061 [2024-07-12 20:45:47.067290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:53.061 [2024-07-12 20:45:47.067318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.668 ms 00:28:53.061 [2024-07-12 20:45:47.067331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.061 [2024-07-12 20:45:47.067371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.061 [2024-07-12 20:45:47.067389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:53.061 [2024-07-12 20:45:47.067402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:53.061 [2024-07-12 20:45:47.067415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.061 [2024-07-12 20:45:47.067479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.061 [2024-07-12 20:45:47.067498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:53.061 [2024-07-12 20:45:47.067511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:53.061 [2024-07-12 20:45:47.067534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.061 [2024-07-12 20:45:47.067569] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:53.061 [2024-07-12 20:45:47.067608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 
00:28:53.061 [2024-07-12 20:45:47.067756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.067896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 
wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:53.061 [2024-07-12 20:45:47.068842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.068854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.068866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.068877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.068889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.068901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.068913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.068926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.068937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.068951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.068964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.068976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.068989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069085] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:53.062 [2024-07-12 20:45:47.069324] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:53.062 [2024-07-12 20:45:47.069338] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbe0db69-32e5-416f-a8d2-bf12c6095658 00:28:53.062 [2024-07-12 20:45:47.069396] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:53.062 [2024-07-12 20:45:47.069408] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:53.062 [2024-07-12 20:45:47.069420] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:53.062 [2024-07-12 20:45:47.069448] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:53.062 [2024-07-12 20:45:47.069460] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:53.062 [2024-07-12 20:45:47.069472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:53.062 [2024-07-12 20:45:47.069490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:53.062 [2024-07-12 20:45:47.069501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:53.062 [2024-07-12 20:45:47.069512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:53.062 [2024-07-12 20:45:47.069524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.062 [2024-07-12 20:45:47.069547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
00:28:53.062 [2024-07-12 20:45:47.069560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.969 ms 00:28:53.062 [2024-07-12 20:45:47.069571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.072468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.062 [2024-07-12 20:45:47.072499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:53.062 [2024-07-12 20:45:47.072514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.872 ms 00:28:53.062 [2024-07-12 20:45:47.072526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.072745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.062 [2024-07-12 20:45:47.072764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:53.062 [2024-07-12 20:45:47.072777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:28:53.062 [2024-07-12 20:45:47.072789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.082570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.062 [2024-07-12 20:45:47.082626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:53.062 [2024-07-12 20:45:47.082644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.062 [2024-07-12 20:45:47.082657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.082735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.062 [2024-07-12 20:45:47.082753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:53.062 [2024-07-12 20:45:47.082766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.062 [2024-07-12 20:45:47.082792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.082867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.062 [2024-07-12 20:45:47.082888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:53.062 [2024-07-12 20:45:47.082902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.062 [2024-07-12 20:45:47.082913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.082944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.062 [2024-07-12 20:45:47.082961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:53.062 [2024-07-12 20:45:47.082974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.062 [2024-07-12 20:45:47.082986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.100113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.062 [2024-07-12 20:45:47.100188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:53.062 [2024-07-12 20:45:47.100208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.062 [2024-07-12 20:45:47.100221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.113511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.062 [2024-07-12 20:45:47.113573] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:53.062 [2024-07-12 20:45:47.113593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.062 [2024-07-12 20:45:47.113605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.113687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.062 [2024-07-12 20:45:47.113707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:53.062 [2024-07-12 20:45:47.113721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.062 [2024-07-12 20:45:47.113733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.113784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.062 [2024-07-12 20:45:47.113831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:53.062 [2024-07-12 20:45:47.113854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.062 [2024-07-12 20:45:47.113881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.113970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.062 [2024-07-12 20:45:47.113992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:53.062 [2024-07-12 20:45:47.114007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.062 [2024-07-12 20:45:47.114018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.114064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.062 [2024-07-12 20:45:47.114083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:53.062 [2024-07-12 20:45:47.114103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.062 [2024-07-12 20:45:47.114115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.114168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.062 [2024-07-12 20:45:47.114186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:53.062 [2024-07-12 20:45:47.114200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.062 [2024-07-12 20:45:47.114212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.114305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.062 [2024-07-12 20:45:47.114333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:53.062 [2024-07-12 20:45:47.114348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.062 [2024-07-12 20:45:47.114368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.062 [2024-07-12 20:45:47.114565] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 50.528 ms, result 0 00:28:53.629 00:28:53.629 00:28:53.629 20:45:47 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:28:53.629 [2024-07-12 20:45:47.730026] Starting SPDK v24.09-pre git 
sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:28:53.629 [2024-07-12 20:45:47.730215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98965 ] 00:28:53.888 [2024-07-12 20:45:47.882651] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:53.888 [2024-07-12 20:45:47.899823] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.888 [2024-07-12 20:45:47.974813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.148 [2024-07-12 20:45:48.124733] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:54.148 [2024-07-12 20:45:48.124847] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:54.148 [2024-07-12 20:45:48.285150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.148 [2024-07-12 20:45:48.285206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:54.148 [2024-07-12 20:45:48.285229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:54.148 [2024-07-12 20:45:48.285260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.148 [2024-07-12 20:45:48.285339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.148 [2024-07-12 20:45:48.285363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:54.148 [2024-07-12 20:45:48.285391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:54.148 [2024-07-12 20:45:48.285403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.148 [2024-07-12 20:45:48.285437] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:54.148 [2024-07-12 20:45:48.285730] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:54.148 [2024-07-12 20:45:48.285771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.148 [2024-07-12 20:45:48.285785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:54.148 [2024-07-12 20:45:48.285803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:28:54.148 [2024-07-12 20:45:48.285816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.148 [2024-07-12 20:45:48.286207] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:54.148 [2024-07-12 20:45:48.286235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.148 [2024-07-12 20:45:48.286280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:54.148 [2024-07-12 20:45:48.286293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:54.148 [2024-07-12 20:45:48.286305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.148 [2024-07-12 20:45:48.286371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.148 [2024-07-12 20:45:48.286390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:54.148 [2024-07-12 20:45:48.286402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.039 ms 00:28:54.148 [2024-07-12 20:45:48.286414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.148 [2024-07-12 20:45:48.286775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.148 [2024-07-12 20:45:48.286805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:54.148 [2024-07-12 20:45:48.286820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:28:54.148 [2024-07-12 20:45:48.286831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.148 [2024-07-12 20:45:48.286910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.148 [2024-07-12 20:45:48.286930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:54.148 [2024-07-12 20:45:48.286953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:54.148 [2024-07-12 20:45:48.286965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.148 [2024-07-12 20:45:48.287008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.148 [2024-07-12 20:45:48.287026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:54.148 [2024-07-12 20:45:48.287039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:54.148 [2024-07-12 20:45:48.287056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.148 [2024-07-12 20:45:48.287086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:54.148 [2024-07-12 20:45:48.289750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.148 [2024-07-12 20:45:48.289787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:54.148 [2024-07-12 20:45:48.289810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.670 ms 00:28:54.148 [2024-07-12 20:45:48.289822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.148 [2024-07-12 20:45:48.289861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.148 [2024-07-12 20:45:48.289879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:54.148 [2024-07-12 20:45:48.289897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:54.148 [2024-07-12 20:45:48.289908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.148 [2024-07-12 20:45:48.289954] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:54.148 [2024-07-12 20:45:48.289987] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:54.148 [2024-07-12 20:45:48.290025] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:54.148 [2024-07-12 20:45:48.290052] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:54.148 [2024-07-12 20:45:48.290143] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:54.148 [2024-07-12 20:45:48.290166] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:54.148 [2024-07-12 20:45:48.290181] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] layout blob store 0x168 bytes 00:28:54.148 [2024-07-12 20:45:48.290196] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:54.148 [2024-07-12 20:45:48.290221] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:54.148 [2024-07-12 20:45:48.290233] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:54.148 [2024-07-12 20:45:48.290278] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:54.148 [2024-07-12 20:45:48.290301] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:54.148 [2024-07-12 20:45:48.290321] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:54.148 [2024-07-12 20:45:48.290350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.148 [2024-07-12 20:45:48.290361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:54.148 [2024-07-12 20:45:48.290374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:28:54.148 [2024-07-12 20:45:48.290401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.148 [2024-07-12 20:45:48.290489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.148 [2024-07-12 20:45:48.290509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:54.148 [2024-07-12 20:45:48.290520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:54.148 [2024-07-12 20:45:48.290531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.148 [2024-07-12 20:45:48.290618] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:54.148 [2024-07-12 20:45:48.290638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:54.148 [2024-07-12 20:45:48.290651] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:54.148 [2024-07-12 20:45:48.290685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:54.148 [2024-07-12 20:45:48.290712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:54.148 [2024-07-12 20:45:48.290723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:54.148 [2024-07-12 20:45:48.290734] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:54.148 [2024-07-12 20:45:48.290746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:54.148 [2024-07-12 20:45:48.290758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:54.148 [2024-07-12 20:45:48.290769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:54.149 [2024-07-12 20:45:48.290780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:54.149 [2024-07-12 20:45:48.290790] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:54.149 [2024-07-12 20:45:48.290801] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:54.149 [2024-07-12 20:45:48.290812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:54.149 [2024-07-12 20:45:48.290829] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:54.149 [2024-07-12 20:45:48.290840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:54.149 [2024-07-12 20:45:48.290851] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:54.149 [2024-07-12 20:45:48.290874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:54.149 [2024-07-12 20:45:48.290887] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:54.149 [2024-07-12 20:45:48.290898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:54.149 [2024-07-12 20:45:48.290914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:54.149 [2024-07-12 20:45:48.290926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:54.149 [2024-07-12 20:45:48.290938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:54.149 [2024-07-12 20:45:48.290949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:54.149 [2024-07-12 20:45:48.290960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:54.149 [2024-07-12 20:45:48.290970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:54.149 [2024-07-12 20:45:48.290981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:54.149 [2024-07-12 20:45:48.290992] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:54.149 [2024-07-12 20:45:48.291002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:54.149 [2024-07-12 20:45:48.291013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:54.149 [2024-07-12 20:45:48.291024] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:54.149 [2024-07-12 20:45:48.291034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:54.149 [2024-07-12 20:45:48.291045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:54.149 [2024-07-12 20:45:48.291056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:54.149 [2024-07-12 20:45:48.291067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:54.149 [2024-07-12 20:45:48.291078] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:54.149 [2024-07-12 20:45:48.291096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:54.149 [2024-07-12 20:45:48.291110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:54.149 [2024-07-12 20:45:48.291121] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:54.149 [2024-07-12 20:45:48.291141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:54.149 [2024-07-12 20:45:48.291151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:54.149 [2024-07-12 20:45:48.291162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:54.149 [2024-07-12 20:45:48.291174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:54.149 [2024-07-12 20:45:48.291196] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:54.149 [2024-07-12 20:45:48.291207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:54.149 [2024-07-12 20:45:48.291218] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:54.149 [2024-07-12 20:45:48.291740] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:54.149 [2024-07-12 20:45:48.291807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:54.149 
[2024-07-12 20:45:48.291853] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:54.149 [2024-07-12 20:45:48.292009] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:54.149 [2024-07-12 20:45:48.292064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:54.149 [2024-07-12 20:45:48.292108] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:54.149 [2024-07-12 20:45:48.292226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:54.149 [2024-07-12 20:45:48.292301] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:54.149 [2024-07-12 20:45:48.292463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:54.149 [2024-07-12 20:45:48.292532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:54.149 [2024-07-12 20:45:48.292693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:54.149 [2024-07-12 20:45:48.292856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:54.149 [2024-07-12 20:45:48.293128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:54.409 [2024-07-12 20:45:48.293195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:54.409 [2024-07-12 20:45:48.293394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:54.409 [2024-07-12 20:45:48.293414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:54.409 [2024-07-12 20:45:48.293425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:54.409 [2024-07-12 20:45:48.293439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:54.409 [2024-07-12 20:45:48.293451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:54.409 [2024-07-12 20:45:48.293463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:54.409 [2024-07-12 20:45:48.293474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:54.409 [2024-07-12 20:45:48.293486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:54.409 [2024-07-12 20:45:48.293504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:54.409 [2024-07-12 20:45:48.293519] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:54.409 [2024-07-12 20:45:48.293532] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:54.409 [2024-07-12 20:45:48.293545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:54.409 [2024-07-12 20:45:48.293557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:54.409 [2024-07-12 20:45:48.293569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:54.409 [2024-07-12 20:45:48.293581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:54.409 [2024-07-12 20:45:48.293594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.293607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:54.409 [2024-07-12 20:45:48.293619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.027 ms 00:28:54.409 [2024-07-12 20:45:48.293638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.315965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.316036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:54.409 [2024-07-12 20:45:48.316069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.228 ms 00:28:54.409 [2024-07-12 20:45:48.316083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.316196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.316215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:54.409 [2024-07-12 20:45:48.316229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:54.409 [2024-07-12 20:45:48.316268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.331665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.331713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:54.409 [2024-07-12 20:45:48.331740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.280 ms 00:28:54.409 [2024-07-12 20:45:48.331755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.331807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.331827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:54.409 [2024-07-12 20:45:48.331840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:54.409 [2024-07-12 20:45:48.331860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.332032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.332053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:54.409 [2024-07-12 20:45:48.332069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:54.409 [2024-07-12 20:45:48.332081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 
20:45:48.332244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.332293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:54.409 [2024-07-12 20:45:48.332312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:28:54.409 [2024-07-12 20:45:48.332325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.342340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.342382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:54.409 [2024-07-12 20:45:48.342401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.969 ms 00:28:54.409 [2024-07-12 20:45:48.342413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.342599] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:54.409 [2024-07-12 20:45:48.342625] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:54.409 [2024-07-12 20:45:48.342641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.342658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:54.409 [2024-07-12 20:45:48.342671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:28:54.409 [2024-07-12 20:45:48.342683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.354029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.354066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:54.409 [2024-07-12 20:45:48.354082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.319 ms 00:28:54.409 [2024-07-12 20:45:48.354094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.354222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.354258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:54.409 [2024-07-12 20:45:48.354279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:28:54.409 [2024-07-12 20:45:48.354309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.354396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.354417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:54.409 [2024-07-12 20:45:48.354430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:54.409 [2024-07-12 20:45:48.354442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.354775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.354805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:54.409 [2024-07-12 20:45:48.354820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:28:54.409 [2024-07-12 20:45:48.354833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.354876] mngt/ftl_mngt_p2l.c: 
132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:54.409 [2024-07-12 20:45:48.354912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.354924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:54.409 [2024-07-12 20:45:48.354950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:54.409 [2024-07-12 20:45:48.354962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.364018] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:54.409 [2024-07-12 20:45:48.364301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.364330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:54.409 [2024-07-12 20:45:48.364344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.310 ms 00:28:54.409 [2024-07-12 20:45:48.364356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.366725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.366764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:54.409 [2024-07-12 20:45:48.366792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.307 ms 00:28:54.409 [2024-07-12 20:45:48.366805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.366913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.366940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:54.409 [2024-07-12 20:45:48.366954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:54.409 [2024-07-12 20:45:48.366966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.409 [2024-07-12 20:45:48.367004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.409 [2024-07-12 20:45:48.367020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:54.409 [2024-07-12 20:45:48.367033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:54.409 [2024-07-12 20:45:48.367045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.410 [2024-07-12 20:45:48.367092] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:54.410 [2024-07-12 20:45:48.367111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.410 [2024-07-12 20:45:48.367126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:54.410 [2024-07-12 20:45:48.367142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:54.410 [2024-07-12 20:45:48.367166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.410 [2024-07-12 20:45:48.372482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.410 [2024-07-12 20:45:48.372524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:54.410 [2024-07-12 20:45:48.372542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.287 ms 00:28:54.410 [2024-07-12 20:45:48.372554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.410 [2024-07-12 20:45:48.372642] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.410 [2024-07-12 20:45:48.372668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:54.410 [2024-07-12 20:45:48.372681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:54.410 [2024-07-12 20:45:48.372701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.410 [2024-07-12 20:45:48.374038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 88.346 ms, result 0 00:29:40.185  Copying: 22/1024 [MB] (22 MBps) Copying: 45/1024 [MB] (23 MBps) Copying: 68/1024 [MB] (23 MBps) Copying: 92/1024 [MB] (23 MBps) Copying: 115/1024 [MB] (23 MBps) Copying: 137/1024 [MB] (22 MBps) Copying: 159/1024 [MB] (22 MBps) Copying: 181/1024 [MB] (22 MBps) Copying: 203/1024 [MB] (21 MBps) Copying: 226/1024 [MB] (22 MBps) Copying: 249/1024 [MB] (22 MBps) Copying: 270/1024 [MB] (21 MBps) Copying: 293/1024 [MB] (22 MBps) Copying: 315/1024 [MB] (22 MBps) Copying: 337/1024 [MB] (21 MBps) Copying: 360/1024 [MB] (22 MBps) Copying: 382/1024 [MB] (22 MBps) Copying: 404/1024 [MB] (22 MBps) Copying: 427/1024 [MB] (22 MBps) Copying: 449/1024 [MB] (22 MBps) Copying: 470/1024 [MB] (21 MBps) Copying: 493/1024 [MB] (22 MBps) Copying: 515/1024 [MB] (22 MBps) Copying: 537/1024 [MB] (21 MBps) Copying: 559/1024 [MB] (21 MBps) Copying: 580/1024 [MB] (21 MBps) Copying: 601/1024 [MB] (21 MBps) Copying: 622/1024 [MB] (21 MBps) Copying: 644/1024 [MB] (21 MBps) Copying: 664/1024 [MB] (20 MBps) Copying: 686/1024 [MB] (21 MBps) Copying: 708/1024 [MB] (22 MBps) Copying: 731/1024 [MB] (22 MBps) Copying: 755/1024 [MB] (23 MBps) Copying: 778/1024 [MB] (23 MBps) Copying: 802/1024 [MB] (23 MBps) Copying: 825/1024 [MB] (23 MBps) Copying: 849/1024 [MB] (23 MBps) Copying: 872/1024 [MB] (23 MBps) Copying: 895/1024 [MB] (23 MBps) Copying: 919/1024 [MB] (24 MBps) Copying: 944/1024 [MB] (24 MBps) Copying: 968/1024 [MB] (24 MBps) Copying: 992/1024 [MB] (23 MBps) Copying: 1016/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-12 20:46:34.155720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.185 [2024-07-12 20:46:34.156168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:40.185 [2024-07-12 20:46:34.156339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:40.185 [2024-07-12 20:46:34.156493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.185 [2024-07-12 20:46:34.156906] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:40.185 [2024-07-12 20:46:34.158269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.185 [2024-07-12 20:46:34.158481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:40.185 [2024-07-12 20:46:34.158601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:29:40.185 [2024-07-12 20:46:34.158674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.185 [2024-07-12 20:46:34.159090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.185 [2024-07-12 20:46:34.159272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:40.185 [2024-07-12 20:46:34.159437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:29:40.185 [2024-07-12 20:46:34.159649] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.185 [2024-07-12 20:46:34.159748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.185 [2024-07-12 20:46:34.159792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:40.186 [2024-07-12 20:46:34.159810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:40.186 [2024-07-12 20:46:34.159826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.186 [2024-07-12 20:46:34.159925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.186 [2024-07-12 20:46:34.159969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:40.186 [2024-07-12 20:46:34.159986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:40.186 [2024-07-12 20:46:34.160011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.186 [2024-07-12 20:46:34.160081] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:40.186 [2024-07-12 20:46:34.160104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 
20:46:34.160718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.160984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 
00:29:40.186 [2024-07-12 20:46:34.161141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 
wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:40.186 [2024-07-12 20:46:34.161905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:40.187 [2024-07-12 20:46:34.161917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:40.187 [2024-07-12 20:46:34.161930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:40.187 [2024-07-12 20:46:34.161942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:40.187 [2024-07-12 20:46:34.161955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:40.187 [2024-07-12 20:46:34.161967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:40.187 [2024-07-12 20:46:34.161981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:40.187 [2024-07-12 20:46:34.161993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:40.187 [2024-07-12 20:46:34.162038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:40.187 [2024-07-12 20:46:34.162053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:40.187 [2024-07-12 20:46:34.162067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:40.187 [2024-07-12 20:46:34.162089] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:40.187 [2024-07-12 20:46:34.162104] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbe0db69-32e5-416f-a8d2-bf12c6095658 00:29:40.187 [2024-07-12 20:46:34.162119] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:40.187 [2024-07-12 20:46:34.162132] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:40.187 [2024-07-12 20:46:34.162145] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:40.187 [2024-07-12 20:46:34.162159] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:40.187 [2024-07-12 20:46:34.162172] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:40.187 [2024-07-12 20:46:34.162186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:40.187 [2024-07-12 20:46:34.162199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:40.187 [2024-07-12 20:46:34.162211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:40.187 [2024-07-12 20:46:34.162224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:40.187 [2024-07-12 20:46:34.162238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.187 [2024-07-12 20:46:34.162260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:40.187 [2024-07-12 20:46:34.162275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.163 ms 00:29:40.187 [2024-07-12 20:46:34.162290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.166203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.187 [2024-07-12 20:46:34.166434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:40.187 [2024-07-12 20:46:34.166612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:29:40.187 [2024-07-12 20:46:34.166746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.166979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.187 [2024-07-12 20:46:34.167082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:29:40.187 [2024-07-12 20:46:34.167303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:29:40.187 [2024-07-12 20:46:34.167431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.177223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.187 [2024-07-12 20:46:34.177501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:40.187 [2024-07-12 20:46:34.177631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.187 [2024-07-12 20:46:34.177686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.177931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.187 [2024-07-12 20:46:34.177995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:40.187 [2024-07-12 20:46:34.178063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.187 [2024-07-12 20:46:34.178179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.178331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.187 [2024-07-12 20:46:34.178417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:40.187 [2024-07-12 20:46:34.178511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.187 [2024-07-12 20:46:34.178567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.178623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.187 [2024-07-12 20:46:34.178682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:40.187 [2024-07-12 20:46:34.178724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.187 [2024-07-12 20:46:34.178852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.199818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.187 [2024-07-12 20:46:34.200235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:40.187 [2024-07-12 20:46:34.200394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.187 [2024-07-12 20:46:34.200449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.216090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.187 [2024-07-12 20:46:34.216594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:40.187 [2024-07-12 20:46:34.216723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.187 [2024-07-12 20:46:34.216858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.217044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.187 [2024-07-12 20:46:34.217143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:40.187 [2024-07-12 20:46:34.217320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.187 [2024-07-12 20:46:34.217417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.217480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.187 
[2024-07-12 20:46:34.217501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:40.187 [2024-07-12 20:46:34.217527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.187 [2024-07-12 20:46:34.217541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.217631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.187 [2024-07-12 20:46:34.217653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:40.187 [2024-07-12 20:46:34.217683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.187 [2024-07-12 20:46:34.217696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.217742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.187 [2024-07-12 20:46:34.217764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:40.187 [2024-07-12 20:46:34.217777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.187 [2024-07-12 20:46:34.217798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.217853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.187 [2024-07-12 20:46:34.217872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:40.187 [2024-07-12 20:46:34.217886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.187 [2024-07-12 20:46:34.217899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.217962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.187 [2024-07-12 20:46:34.217982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:40.187 [2024-07-12 20:46:34.218048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.187 [2024-07-12 20:46:34.218062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.187 [2024-07-12 20:46:34.218253] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 62.491 ms, result 0 00:29:40.575 00:29:40.575 00:29:40.575 20:46:34 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:43.110 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:43.110 20:46:36 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:43.110 [2024-07-12 20:46:36.737837] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:29:43.110 [2024-07-12 20:46:36.738028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99441 ] 00:29:43.110 [2024-07-12 20:46:36.890641] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:43.110 [2024-07-12 20:46:36.913316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.110 [2024-07-12 20:46:37.052791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.110 [2024-07-12 20:46:37.216430] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:43.110 [2024-07-12 20:46:37.216894] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:43.369 [2024-07-12 20:46:37.381560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.369 [2024-07-12 20:46:37.381675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:43.369 [2024-07-12 20:46:37.381718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:43.369 [2024-07-12 20:46:37.381730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.369 [2024-07-12 20:46:37.381816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.369 [2024-07-12 20:46:37.381845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:43.369 [2024-07-12 20:46:37.381862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:43.369 [2024-07-12 20:46:37.381893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.369 [2024-07-12 20:46:37.381940] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:43.369 [2024-07-12 20:46:37.382309] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:43.369 [2024-07-12 20:46:37.382340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.369 [2024-07-12 20:46:37.382353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:43.369 [2024-07-12 20:46:37.382371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:29:43.369 [2024-07-12 20:46:37.382383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.369 [2024-07-12 20:46:37.382806] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:43.369 [2024-07-12 20:46:37.382840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.369 [2024-07-12 20:46:37.382869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:43.369 [2024-07-12 20:46:37.382882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:43.369 [2024-07-12 20:46:37.382893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.369 [2024-07-12 20:46:37.382972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.369 [2024-07-12 20:46:37.383009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:43.369 [2024-07-12 20:46:37.383022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:43.369 [2024-07-12 20:46:37.383033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.369 [2024-07-12 20:46:37.383425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.369 [2024-07-12 20:46:37.383453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:43.369 [2024-07-12 20:46:37.383468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:29:43.369 [2024-07-12 20:46:37.383480] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.369 [2024-07-12 20:46:37.383569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.369 [2024-07-12 20:46:37.383586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:43.369 [2024-07-12 20:46:37.383599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:43.369 [2024-07-12 20:46:37.383651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.369 [2024-07-12 20:46:37.383696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.369 [2024-07-12 20:46:37.383711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:43.369 [2024-07-12 20:46:37.383724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:43.369 [2024-07-12 20:46:37.383740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.369 [2024-07-12 20:46:37.383770] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:43.369 [2024-07-12 20:46:37.386679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.369 [2024-07-12 20:46:37.386725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:43.369 [2024-07-12 20:46:37.386762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.916 ms 00:29:43.369 [2024-07-12 20:46:37.386774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.369 [2024-07-12 20:46:37.386815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.369 [2024-07-12 20:46:37.386853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:43.369 [2024-07-12 20:46:37.386870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:43.369 [2024-07-12 20:46:37.386881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.369 [2024-07-12 20:46:37.386948] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:43.369 [2024-07-12 20:46:37.386994] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:43.369 [2024-07-12 20:46:37.387046] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:43.369 [2024-07-12 20:46:37.387066] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:43.369 [2024-07-12 20:46:37.387166] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:43.369 [2024-07-12 20:46:37.387203] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:43.369 [2024-07-12 20:46:37.387217] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:43.369 [2024-07-12 20:46:37.387232] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:43.369 [2024-07-12 20:46:37.387245] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:43.369 [2024-07-12 20:46:37.387267] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:43.369 [2024-07-12 20:46:37.387308] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:29:43.369 [2024-07-12 20:46:37.387323] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:43.369 [2024-07-12 20:46:37.387349] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:43.369 [2024-07-12 20:46:37.387369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.369 [2024-07-12 20:46:37.387381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:43.369 [2024-07-12 20:46:37.387393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:29:43.369 [2024-07-12 20:46:37.387408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.369 [2024-07-12 20:46:37.387494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.369 [2024-07-12 20:46:37.387509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:43.369 [2024-07-12 20:46:37.387520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:43.369 [2024-07-12 20:46:37.387531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.369 [2024-07-12 20:46:37.387662] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:43.369 [2024-07-12 20:46:37.387681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:43.369 [2024-07-12 20:46:37.387694] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:43.369 [2024-07-12 20:46:37.387706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:43.369 [2024-07-12 20:46:37.387730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:43.369 [2024-07-12 20:46:37.387741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:43.369 [2024-07-12 20:46:37.387752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:43.369 [2024-07-12 20:46:37.387763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:43.369 [2024-07-12 20:46:37.387774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:43.369 [2024-07-12 20:46:37.387785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:43.369 [2024-07-12 20:46:37.387796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:43.369 [2024-07-12 20:46:37.387807] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:43.369 [2024-07-12 20:46:37.387817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:43.369 [2024-07-12 20:46:37.387828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:43.370 [2024-07-12 20:46:37.387839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:43.370 [2024-07-12 20:46:37.387852] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:43.370 [2024-07-12 20:46:37.387864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:43.370 [2024-07-12 20:46:37.387888] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:43.370 [2024-07-12 20:46:37.387900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:43.370 [2024-07-12 20:46:37.387911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:43.370 [2024-07-12 20:46:37.387942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:43.370 [2024-07-12 20:46:37.387969] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:43.370 [2024-07-12 20:46:37.387979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:43.370 [2024-07-12 20:46:37.387989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:43.370 [2024-07-12 20:46:37.387999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:43.370 [2024-07-12 20:46:37.388009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:43.370 [2024-07-12 20:46:37.388018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:43.370 [2024-07-12 20:46:37.388028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:43.370 [2024-07-12 20:46:37.388038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:43.370 [2024-07-12 20:46:37.388048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:43.370 [2024-07-12 20:46:37.388058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:43.370 [2024-07-12 20:46:37.388068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:43.370 [2024-07-12 20:46:37.388078] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:43.370 [2024-07-12 20:46:37.388088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:43.370 [2024-07-12 20:46:37.388098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:43.370 [2024-07-12 20:46:37.388109] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:43.370 [2024-07-12 20:46:37.388125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:43.370 [2024-07-12 20:46:37.388136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:43.370 [2024-07-12 20:46:37.388146] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:43.370 [2024-07-12 20:46:37.388156] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:43.370 [2024-07-12 20:46:37.388167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:43.370 [2024-07-12 20:46:37.388177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:43.370 [2024-07-12 20:46:37.388188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:43.370 [2024-07-12 20:46:37.388198] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:43.370 [2024-07-12 20:46:37.388209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:43.370 [2024-07-12 20:46:37.388220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:43.370 [2024-07-12 20:46:37.388231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:43.370 [2024-07-12 20:46:37.388244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:43.370 [2024-07-12 20:46:37.388255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:43.370 [2024-07-12 20:46:37.388266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:43.370 [2024-07-12 20:46:37.388277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:43.370 [2024-07-12 20:46:37.388310] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:43.370 [2024-07-12 20:46:37.388328] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:29:43.370 [2024-07-12 20:46:37.388341] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:43.370 [2024-07-12 20:46:37.388355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:43.370 [2024-07-12 20:46:37.388367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:43.370 [2024-07-12 20:46:37.388379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:43.370 [2024-07-12 20:46:37.388390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:43.370 [2024-07-12 20:46:37.388401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:43.370 [2024-07-12 20:46:37.388413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:43.370 [2024-07-12 20:46:37.388423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:43.370 [2024-07-12 20:46:37.388435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:43.370 [2024-07-12 20:46:37.388446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:43.370 [2024-07-12 20:46:37.388457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:43.370 [2024-07-12 20:46:37.388468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:43.370 [2024-07-12 20:46:37.388480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:43.370 [2024-07-12 20:46:37.388490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:43.370 [2024-07-12 20:46:37.388502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:43.370 [2024-07-12 20:46:37.388516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:43.370 [2024-07-12 20:46:37.388528] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:43.370 [2024-07-12 20:46:37.388540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:43.370 [2024-07-12 20:46:37.388552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:43.370 [2024-07-12 20:46:37.388563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:43.370 [2024-07-12 20:46:37.388575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:43.370 [2024-07-12 20:46:37.388586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:43.370 [2024-07-12 20:46:37.388598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.370 [2024-07-12 20:46:37.388609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:43.370 [2024-07-12 20:46:37.388621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:29:43.370 [2024-07-12 20:46:37.388637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.370 [2024-07-12 20:46:37.419792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.370 [2024-07-12 20:46:37.419921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:43.370 [2024-07-12 20:46:37.419977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.077 ms 00:29:43.370 [2024-07-12 20:46:37.420012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.370 [2024-07-12 20:46:37.420338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.370 [2024-07-12 20:46:37.420386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:43.370 [2024-07-12 20:46:37.420414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:29:43.370 [2024-07-12 20:46:37.420460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.370 [2024-07-12 20:46:37.439307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.370 [2024-07-12 20:46:37.439412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:43.370 [2024-07-12 20:46:37.439459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.649 ms 00:29:43.370 [2024-07-12 20:46:37.439476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.370 [2024-07-12 20:46:37.439574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.370 [2024-07-12 20:46:37.439594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:43.370 [2024-07-12 20:46:37.439638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:43.370 [2024-07-12 20:46:37.439663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.370 [2024-07-12 20:46:37.439852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.370 [2024-07-12 20:46:37.439875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:43.370 [2024-07-12 20:46:37.439907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:29:43.370 [2024-07-12 20:46:37.439922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.370 [2024-07-12 20:46:37.440160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.370 [2024-07-12 20:46:37.440183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:43.370 [2024-07-12 20:46:37.440200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:29:43.370 [2024-07-12 20:46:37.440214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.370 [2024-07-12 20:46:37.451350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.370 [2024-07-12 
20:46:37.451452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:43.370 [2024-07-12 20:46:37.451478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.051 ms 00:29:43.370 [2024-07-12 20:46:37.451494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.370 [2024-07-12 20:46:37.451807] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:43.370 [2024-07-12 20:46:37.451837] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:43.370 [2024-07-12 20:46:37.451858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.370 [2024-07-12 20:46:37.451880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:43.370 [2024-07-12 20:46:37.451896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:29:43.370 [2024-07-12 20:46:37.451921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.370 [2024-07-12 20:46:37.470007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.370 [2024-07-12 20:46:37.470117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:43.370 [2024-07-12 20:46:37.470156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.042 ms 00:29:43.370 [2024-07-12 20:46:37.470172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.370 [2024-07-12 20:46:37.470477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.370 [2024-07-12 20:46:37.470513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:43.370 [2024-07-12 20:46:37.470532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:29:43.370 [2024-07-12 20:46:37.470554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.370 [2024-07-12 20:46:37.470673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.371 [2024-07-12 20:46:37.470707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:43.371 [2024-07-12 20:46:37.470735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:43.371 [2024-07-12 20:46:37.470750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.371 [2024-07-12 20:46:37.471512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.371 [2024-07-12 20:46:37.471554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:43.371 [2024-07-12 20:46:37.471574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:29:43.371 [2024-07-12 20:46:37.471590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.371 [2024-07-12 20:46:37.471662] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:43.371 [2024-07-12 20:46:37.471685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.371 [2024-07-12 20:46:37.471701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:43.371 [2024-07-12 20:46:37.471729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:43.371 [2024-07-12 20:46:37.471744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.371 [2024-07-12 20:46:37.484755] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:43.371 [2024-07-12 20:46:37.485096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.371 [2024-07-12 20:46:37.485137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:43.371 [2024-07-12 20:46:37.485158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.307 ms 00:29:43.371 [2024-07-12 20:46:37.485185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.371 [2024-07-12 20:46:37.488556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.371 [2024-07-12 20:46:37.488596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:43.371 [2024-07-12 20:46:37.488638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.265 ms 00:29:43.371 [2024-07-12 20:46:37.488649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.371 [2024-07-12 20:46:37.488822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.371 [2024-07-12 20:46:37.488844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:43.371 [2024-07-12 20:46:37.488858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:43.371 [2024-07-12 20:46:37.488871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.371 [2024-07-12 20:46:37.488916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.371 [2024-07-12 20:46:37.488931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:43.371 [2024-07-12 20:46:37.488943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:43.371 [2024-07-12 20:46:37.488955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.371 [2024-07-12 20:46:37.489004] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:43.371 [2024-07-12 20:46:37.489021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.371 [2024-07-12 20:46:37.489036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:43.371 [2024-07-12 20:46:37.489052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:43.371 [2024-07-12 20:46:37.489072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.371 [2024-07-12 20:46:37.494333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.371 [2024-07-12 20:46:37.494378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:43.371 [2024-07-12 20:46:37.494413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.234 ms 00:29:43.371 [2024-07-12 20:46:37.494427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.371 [2024-07-12 20:46:37.494526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.371 [2024-07-12 20:46:37.494557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:43.371 [2024-07-12 20:46:37.494571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:43.371 [2024-07-12 20:46:37.494589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.371 [2024-07-12 20:46:37.496221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 114.058 ms, result 
0 00:30:27.873  Copying: 24/1024 [MB] (24 MBps) Copying: 47/1024 [MB] (23 MBps) Copying: 71/1024 [MB] (23 MBps) Copying: 96/1024 [MB] (24 MBps) Copying: 120/1024 [MB] (23 MBps) Copying: 143/1024 [MB] (23 MBps) Copying: 168/1024 [MB] (25 MBps) Copying: 193/1024 [MB] (25 MBps) Copying: 218/1024 [MB] (24 MBps) Copying: 243/1024 [MB] (25 MBps) Copying: 267/1024 [MB] (24 MBps) Copying: 292/1024 [MB] (24 MBps) Copying: 317/1024 [MB] (24 MBps) Copying: 343/1024 [MB] (25 MBps) Copying: 367/1024 [MB] (24 MBps) Copying: 391/1024 [MB] (23 MBps) Copying: 416/1024 [MB] (24 MBps) Copying: 439/1024 [MB] (23 MBps) Copying: 462/1024 [MB] (23 MBps) Copying: 486/1024 [MB] (24 MBps) Copying: 510/1024 [MB] (23 MBps) Copying: 534/1024 [MB] (23 MBps) Copying: 558/1024 [MB] (24 MBps) Copying: 581/1024 [MB] (23 MBps) Copying: 605/1024 [MB] (23 MBps) Copying: 629/1024 [MB] (23 MBps) Copying: 652/1024 [MB] (23 MBps) Copying: 676/1024 [MB] (23 MBps) Copying: 699/1024 [MB] (23 MBps) Copying: 723/1024 [MB] (23 MBps) Copying: 746/1024 [MB] (23 MBps) Copying: 769/1024 [MB] (23 MBps) Copying: 792/1024 [MB] (23 MBps) Copying: 815/1024 [MB] (23 MBps) Copying: 838/1024 [MB] (22 MBps) Copying: 861/1024 [MB] (22 MBps) Copying: 884/1024 [MB] (22 MBps) Copying: 906/1024 [MB] (21 MBps) Copying: 929/1024 [MB] (22 MBps) Copying: 951/1024 [MB] (22 MBps) Copying: 974/1024 [MB] (23 MBps) Copying: 997/1024 [MB] (23 MBps) Copying: 1021/1024 [MB] (23 MBps) Copying: 1048312/1048576 [kB] (2088 kBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-12 20:47:21.836132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.873 [2024-07-12 20:47:21.836281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:27.873 [2024-07-12 20:47:21.836323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:27.873 [2024-07-12 20:47:21.836350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.873 [2024-07-12 20:47:21.838799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:27.873 [2024-07-12 20:47:21.842441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.873 [2024-07-12 20:47:21.842480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:27.873 [2024-07-12 20:47:21.842525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.579 ms 00:30:27.873 [2024-07-12 20:47:21.842538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.873 [2024-07-12 20:47:21.852006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.873 [2024-07-12 20:47:21.852049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:27.873 [2024-07-12 20:47:21.852077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:30:27.873 [2024-07-12 20:47:21.852088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.873 [2024-07-12 20:47:21.852124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.873 [2024-07-12 20:47:21.852138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:27.873 [2024-07-12 20:47:21.852165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:27.873 [2024-07-12 20:47:21.852183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.873 [2024-07-12 20:47:21.852265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:27.873 [2024-07-12 20:47:21.852328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:27.873 [2024-07-12 20:47:21.852341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:27.873 [2024-07-12 20:47:21.852352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.873 [2024-07-12 20:47:21.852384] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:27.873 [2024-07-12 20:47:21.852418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128768 / 261120 wr_cnt: 1 state: open 00:30:27.873 [2024-07-12 20:47:21.852433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852706] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.852996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.853008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.853020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 
20:47:21.853037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:27.873 [2024-07-12 20:47:21.853049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:30:27.874 [2024-07-12 20:47:21.853377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:27.874 [2024-07-12 20:47:21.853750] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:27.874 [2024-07-12 20:47:21.853777] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbe0db69-32e5-416f-a8d2-bf12c6095658 00:30:27.874 [2024-07-12 20:47:21.853789] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128768 00:30:27.874 [2024-07-12 20:47:21.853800] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128800 00:30:27.874 [2024-07-12 20:47:21.853815] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128768 00:30:27.874 [2024-07-12 20:47:21.853837] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:30:27.874 [2024-07-12 20:47:21.853849] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:27.874 [2024-07-12 20:47:21.853860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:27.874 [2024-07-12 20:47:21.853880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:27.874 [2024-07-12 20:47:21.853891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:27.874 [2024-07-12 20:47:21.853901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:27.874 [2024-07-12 20:47:21.853912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.874 [2024-07-12 20:47:21.853924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:27.874 [2024-07-12 20:47:21.853936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.530 ms 00:30:27.874 [2024-07-12 20:47:21.853947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.874 [2024-07-12 20:47:21.856265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.874 [2024-07-12 20:47:21.856295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:27.874 [2024-07-12 20:47:21.856308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.296 ms 00:30:27.874 [2024-07-12 20:47:21.856336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.874 [2024-07-12 20:47:21.856465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.874 [2024-07-12 20:47:21.856480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:27.874 [2024-07-12 20:47:21.856492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:30:27.874 [2024-07-12 20:47:21.856503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.874 [2024-07-12 20:47:21.863753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.874 [2024-07-12 20:47:21.863794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:27.874 [2024-07-12 20:47:21.863811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:30:27.874 [2024-07-12 20:47:21.863822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.874 [2024-07-12 20:47:21.863881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.874 [2024-07-12 20:47:21.863897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:27.874 [2024-07-12 20:47:21.863909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.874 [2024-07-12 20:47:21.863921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.874 [2024-07-12 20:47:21.864020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.874 [2024-07-12 20:47:21.864039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:27.874 [2024-07-12 20:47:21.864050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.874 [2024-07-12 20:47:21.864061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.874 [2024-07-12 20:47:21.864098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.874 [2024-07-12 20:47:21.864111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:27.874 [2024-07-12 20:47:21.864122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.874 [2024-07-12 20:47:21.864133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.874 [2024-07-12 20:47:21.877987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.874 [2024-07-12 20:47:21.878050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:27.874 [2024-07-12 20:47:21.878100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.874 [2024-07-12 20:47:21.878113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.874 [2024-07-12 20:47:21.889622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.874 [2024-07-12 20:47:21.889709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:27.874 [2024-07-12 20:47:21.889729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.874 [2024-07-12 20:47:21.889741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.874 [2024-07-12 20:47:21.889818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.874 [2024-07-12 20:47:21.889842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:27.874 [2024-07-12 20:47:21.889855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.874 [2024-07-12 20:47:21.889867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.874 [2024-07-12 20:47:21.889921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.874 [2024-07-12 20:47:21.889936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:27.874 [2024-07-12 20:47:21.889948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.874 [2024-07-12 20:47:21.889959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.875 [2024-07-12 20:47:21.890047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.875 [2024-07-12 20:47:21.890070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:27.875 
[2024-07-12 20:47:21.890083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.875 [2024-07-12 20:47:21.890094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.875 [2024-07-12 20:47:21.890131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.875 [2024-07-12 20:47:21.890148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:27.875 [2024-07-12 20:47:21.890161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.875 [2024-07-12 20:47:21.890172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.875 [2024-07-12 20:47:21.890219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.875 [2024-07-12 20:47:21.890234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:27.875 [2024-07-12 20:47:21.890295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.875 [2024-07-12 20:47:21.890308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.875 [2024-07-12 20:47:21.890380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.875 [2024-07-12 20:47:21.890397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:27.875 [2024-07-12 20:47:21.890410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.875 [2024-07-12 20:47:21.890422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.875 [2024-07-12 20:47:21.890633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 55.766 ms, result 0 00:30:28.443 00:30:28.443 00:30:28.443 20:47:22 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:30:28.702 [2024-07-12 20:47:22.679398] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:30:28.702 [2024-07-12 20:47:22.679621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99884 ] 00:30:28.702 [2024-07-12 20:47:22.844621] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:28.960 [2024-07-12 20:47:22.864028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.960 [2024-07-12 20:47:22.946749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.960 [2024-07-12 20:47:23.077372] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:28.960 [2024-07-12 20:47:23.077457] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:29.219 [2024-07-12 20:47:23.237471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.219 [2024-07-12 20:47:23.237533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:29.219 [2024-07-12 20:47:23.237571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:29.219 [2024-07-12 20:47:23.237615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.219 [2024-07-12 20:47:23.237731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.219 [2024-07-12 20:47:23.237756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:29.219 [2024-07-12 20:47:23.237779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:29.219 [2024-07-12 20:47:23.237789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.219 [2024-07-12 20:47:23.237819] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:29.219 [2024-07-12 20:47:23.238103] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:29.219 [2024-07-12 20:47:23.238128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.219 [2024-07-12 20:47:23.238143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:29.219 [2024-07-12 20:47:23.238156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:30:29.219 [2024-07-12 20:47:23.238176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.219 [2024-07-12 20:47:23.238710] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:29.219 [2024-07-12 20:47:23.238755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.219 [2024-07-12 20:47:23.238775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:29.219 [2024-07-12 20:47:23.238789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:29.219 [2024-07-12 20:47:23.238800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.219 [2024-07-12 20:47:23.238858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.219 [2024-07-12 20:47:23.238890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:29.219 [2024-07-12 20:47:23.238902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:29.219 [2024-07-12 20:47:23.238912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.219 [2024-07-12 20:47:23.239377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.219 [2024-07-12 20:47:23.239402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:29.219 [2024-07-12 20:47:23.239416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:30:29.219 [2024-07-12 20:47:23.239428] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.219 [2024-07-12 20:47:23.239526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.219 [2024-07-12 20:47:23.239546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:29.219 [2024-07-12 20:47:23.239560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:30:29.219 [2024-07-12 20:47:23.239570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.219 [2024-07-12 20:47:23.239620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.219 [2024-07-12 20:47:23.239650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:29.219 [2024-07-12 20:47:23.239681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:29.219 [2024-07-12 20:47:23.239697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.219 [2024-07-12 20:47:23.239735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:29.219 [2024-07-12 20:47:23.242251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.219 [2024-07-12 20:47:23.242348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:29.219 [2024-07-12 20:47:23.242365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.530 ms 00:30:29.219 [2024-07-12 20:47:23.242377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.219 [2024-07-12 20:47:23.242443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.219 [2024-07-12 20:47:23.242469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:29.219 [2024-07-12 20:47:23.242486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:29.219 [2024-07-12 20:47:23.242509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.219 [2024-07-12 20:47:23.242583] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:29.220 [2024-07-12 20:47:23.242632] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:29.220 [2024-07-12 20:47:23.242690] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:29.220 [2024-07-12 20:47:23.242734] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:29.220 [2024-07-12 20:47:23.242826] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:29.220 [2024-07-12 20:47:23.242847] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:29.220 [2024-07-12 20:47:23.242860] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:29.220 [2024-07-12 20:47:23.242883] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:29.220 [2024-07-12 20:47:23.242895] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:29.220 [2024-07-12 20:47:23.242906] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:29.220 [2024-07-12 20:47:23.242924] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:30:29.220 [2024-07-12 20:47:23.242934] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:29.220 [2024-07-12 20:47:23.242944] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:29.220 [2024-07-12 20:47:23.242954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.220 [2024-07-12 20:47:23.242967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:29.220 [2024-07-12 20:47:23.242978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:30:29.220 [2024-07-12 20:47:23.242992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.220 [2024-07-12 20:47:23.243092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.220 [2024-07-12 20:47:23.243106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:29.220 [2024-07-12 20:47:23.243118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:30:29.220 [2024-07-12 20:47:23.243128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.220 [2024-07-12 20:47:23.243241] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:29.220 [2024-07-12 20:47:23.243294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:29.220 [2024-07-12 20:47:23.243323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:29.220 [2024-07-12 20:47:23.243334] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:29.220 [2024-07-12 20:47:23.243382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:29.220 [2024-07-12 20:47:23.243404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:29.220 [2024-07-12 20:47:23.243418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:29.220 [2024-07-12 20:47:23.243440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:29.220 [2024-07-12 20:47:23.243450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:29.220 [2024-07-12 20:47:23.243461] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:29.220 [2024-07-12 20:47:23.243471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:29.220 [2024-07-12 20:47:23.243482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:29.220 [2024-07-12 20:47:23.243493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:29.220 [2024-07-12 20:47:23.243526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:29.220 [2024-07-12 20:47:23.243537] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:29.220 [2024-07-12 20:47:23.243573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243585] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.220 [2024-07-12 20:47:23.243596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:29.220 [2024-07-12 20:47:23.243607] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.220 [2024-07-12 20:47:23.243632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:29.220 [2024-07-12 20:47:23.243677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.220 [2024-07-12 20:47:23.243699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:29.220 [2024-07-12 20:47:23.243710] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.220 [2024-07-12 20:47:23.243731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:29.220 [2024-07-12 20:47:23.243741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:29.220 [2024-07-12 20:47:23.243762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:29.220 [2024-07-12 20:47:23.243773] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:29.220 [2024-07-12 20:47:23.243784] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:29.220 [2024-07-12 20:47:23.243794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:29.220 [2024-07-12 20:47:23.243804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:29.220 [2024-07-12 20:47:23.243815] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:29.220 [2024-07-12 20:47:23.243840] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:29.220 [2024-07-12 20:47:23.243852] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243862] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:29.220 [2024-07-12 20:47:23.243874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:29.220 [2024-07-12 20:47:23.243885] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:29.220 [2024-07-12 20:47:23.243897] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.220 [2024-07-12 20:47:23.243919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:29.220 [2024-07-12 20:47:23.243930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:29.220 [2024-07-12 20:47:23.243984] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:29.220 [2024-07-12 20:47:23.244010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:29.220 [2024-07-12 20:47:23.244019] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:29.220 [2024-07-12 20:47:23.244029] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:30:29.220 [2024-07-12 20:47:23.244041] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:29.220 [2024-07-12 20:47:23.244054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:29.220 [2024-07-12 20:47:23.244077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:29.220 [2024-07-12 20:47:23.244096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:29.220 [2024-07-12 20:47:23.244108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:29.220 [2024-07-12 20:47:23.244118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:29.220 [2024-07-12 20:47:23.244128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:29.220 [2024-07-12 20:47:23.244138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:29.220 [2024-07-12 20:47:23.244148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:29.220 [2024-07-12 20:47:23.244158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:29.220 [2024-07-12 20:47:23.244168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:29.220 [2024-07-12 20:47:23.244178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:29.220 [2024-07-12 20:47:23.244188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:29.220 [2024-07-12 20:47:23.244198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:29.220 [2024-07-12 20:47:23.244208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:29.220 [2024-07-12 20:47:23.244218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:29.220 [2024-07-12 20:47:23.244228] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:29.220 [2024-07-12 20:47:23.244256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:29.220 [2024-07-12 20:47:23.244302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:29.220 [2024-07-12 20:47:23.244318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:29.220 [2024-07-12 20:47:23.244331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:29.220 [2024-07-12 20:47:23.244343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:29.220 [2024-07-12 20:47:23.244380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.220 [2024-07-12 20:47:23.244401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:29.220 [2024-07-12 20:47:23.244412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:30:29.220 [2024-07-12 20:47:23.244428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.220 [2024-07-12 20:47:23.270854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.220 [2024-07-12 20:47:23.270936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:29.220 [2024-07-12 20:47:23.270963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.353 ms 00:30:29.220 [2024-07-12 20:47:23.270978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.220 [2024-07-12 20:47:23.271125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.271157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:29.221 [2024-07-12 20:47:23.271173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:30:29.221 [2024-07-12 20:47:23.271187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.285503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.285565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:29.221 [2024-07-12 20:47:23.285588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.158 ms 00:30:29.221 [2024-07-12 20:47:23.285619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.285687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.285708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:29.221 [2024-07-12 20:47:23.285734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:29.221 [2024-07-12 20:47:23.285753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.285919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.285943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:29.221 [2024-07-12 20:47:23.285974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:30:29.221 [2024-07-12 20:47:23.285988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.286179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.286202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:29.221 [2024-07-12 20:47:23.286229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:30:29.221 [2024-07-12 20:47:23.286273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.294880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 
20:47:23.294937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:29.221 [2024-07-12 20:47:23.294973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.562 ms 00:30:29.221 [2024-07-12 20:47:23.294985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.295145] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:29.221 [2024-07-12 20:47:23.295168] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:29.221 [2024-07-12 20:47:23.295183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.295199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:29.221 [2024-07-12 20:47:23.295225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:30:29.221 [2024-07-12 20:47:23.295251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.309405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.309437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:29.221 [2024-07-12 20:47:23.309469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.075 ms 00:30:29.221 [2024-07-12 20:47:23.309480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.309659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.309701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:29.221 [2024-07-12 20:47:23.309712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:30:29.221 [2024-07-12 20:47:23.309726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.309792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.309821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:29.221 [2024-07-12 20:47:23.309833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:29.221 [2024-07-12 20:47:23.309843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.310161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.310178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:29.221 [2024-07-12 20:47:23.310190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:30:29.221 [2024-07-12 20:47:23.310199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.310229] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:29.221 [2024-07-12 20:47:23.310260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.310286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:29.221 [2024-07-12 20:47:23.310298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:29.221 [2024-07-12 20:47:23.310309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.320696] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:29.221 [2024-07-12 20:47:23.320963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.320986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:29.221 [2024-07-12 20:47:23.320999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.630 ms 00:30:29.221 [2024-07-12 20:47:23.321009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.324077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.324127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:29.221 [2024-07-12 20:47:23.324142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:30:29.221 [2024-07-12 20:47:23.324154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.324275] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:30:29.221 [2024-07-12 20:47:23.325075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.325126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:29.221 [2024-07-12 20:47:23.325148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:30:29.221 [2024-07-12 20:47:23.325164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.325213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.325234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:29.221 [2024-07-12 20:47:23.325287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:29.221 [2024-07-12 20:47:23.325303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.325380] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:29.221 [2024-07-12 20:47:23.325405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.325427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:29.221 [2024-07-12 20:47:23.325444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:29.221 [2024-07-12 20:47:23.325460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.330844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.330884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:29.221 [2024-07-12 20:47:23.330916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.346 ms 00:30:29.221 [2024-07-12 20:47:23.330927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 [2024-07-12 20:47:23.331013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.221 [2024-07-12 20:47:23.331029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:29.221 [2024-07-12 20:47:23.331047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:29.221 [2024-07-12 20:47:23.331056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.221 
[2024-07-12 20:47:23.340654] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 99.992 ms, result 0 00:31:10.400  Copying: 24/1024 [MB] (24 MBps) Copying: 47/1024 [MB] (23 MBps) Copying: 71/1024 [MB] (23 MBps) Copying: 95/1024 [MB] (23 MBps) Copying: 118/1024 [MB] (23 MBps) Copying: 141/1024 [MB] (23 MBps) Copying: 166/1024 [MB] (24 MBps) Copying: 191/1024 [MB] (24 MBps) Copying: 214/1024 [MB] (23 MBps) Copying: 239/1024 [MB] (24 MBps) Copying: 264/1024 [MB] (25 MBps) Copying: 289/1024 [MB] (25 MBps) Copying: 315/1024 [MB] (25 MBps) Copying: 340/1024 [MB] (25 MBps) Copying: 366/1024 [MB] (25 MBps) Copying: 391/1024 [MB] (25 MBps) Copying: 416/1024 [MB] (24 MBps) Copying: 442/1024 [MB] (26 MBps) Copying: 467/1024 [MB] (25 MBps) Copying: 493/1024 [MB] (25 MBps) Copying: 518/1024 [MB] (25 MBps) Copying: 543/1024 [MB] (25 MBps) Copying: 568/1024 [MB] (24 MBps) Copying: 594/1024 [MB] (25 MBps) Copying: 620/1024 [MB] (26 MBps) Copying: 646/1024 [MB] (26 MBps) Copying: 672/1024 [MB] (25 MBps) Copying: 698/1024 [MB] (26 MBps) Copying: 724/1024 [MB] (25 MBps) Copying: 749/1024 [MB] (24 MBps) Copying: 775/1024 [MB] (25 MBps) Copying: 801/1024 [MB] (26 MBps) Copying: 827/1024 [MB] (25 MBps) Copying: 853/1024 [MB] (26 MBps) Copying: 878/1024 [MB] (25 MBps) Copying: 905/1024 [MB] (26 MBps) Copying: 930/1024 [MB] (25 MBps) Copying: 956/1024 [MB] (26 MBps) Copying: 982/1024 [MB] (25 MBps) Copying: 1008/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-12 20:48:04.426403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.400 [2024-07-12 20:48:04.426515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:10.400 [2024-07-12 20:48:04.426542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:10.400 [2024-07-12 20:48:04.426567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.400 [2024-07-12 20:48:04.426608] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:10.400 [2024-07-12 20:48:04.427899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.400 [2024-07-12 20:48:04.427935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:10.400 [2024-07-12 20:48:04.427962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.249 ms 00:31:10.400 [2024-07-12 20:48:04.427975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.400 [2024-07-12 20:48:04.428278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.400 [2024-07-12 20:48:04.428314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:10.400 [2024-07-12 20:48:04.428331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:31:10.400 [2024-07-12 20:48:04.428342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.400 [2024-07-12 20:48:04.428381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.400 [2024-07-12 20:48:04.428395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:10.400 [2024-07-12 20:48:04.428407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:10.400 [2024-07-12 20:48:04.428425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.400 [2024-07-12 20:48:04.428498] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.400 [2024-07-12 20:48:04.428515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:10.400 [2024-07-12 20:48:04.428528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:10.400 [2024-07-12 20:48:04.428539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.400 [2024-07-12 20:48:04.428561] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:10.400 [2024-07-12 20:48:04.428579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133376 / 261120 wr_cnt: 1 state: open 00:31:10.400 [2024-07-12 20:48:04.428608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:10.400 [2024-07-12 20:48:04.428799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428865] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.428995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 
[2024-07-12 20:48:04.429182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:31:10.401 [2024-07-12 20:48:04.429581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:10.401 [2024-07-12 20:48:04.429785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:10.402 [2024-07-12 20:48:04.429973] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:10.402 [2024-07-12 20:48:04.429985] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbe0db69-32e5-416f-a8d2-bf12c6095658 00:31:10.402 [2024-07-12 20:48:04.430014] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133376 00:31:10.402 [2024-07-12 20:48:04.430031] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4640 00:31:10.402 [2024-07-12 20:48:04.430043] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4608 00:31:10.402 [2024-07-12 20:48:04.430055] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0069 00:31:10.402 [2024-07-12 20:48:04.430066] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:10.402 [2024-07-12 20:48:04.430078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:10.402 [2024-07-12 20:48:04.430089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:10.402 [2024-07-12 20:48:04.430100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:10.402 [2024-07-12 20:48:04.430110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:10.402 [2024-07-12 20:48:04.430121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.402 [2024-07-12 20:48:04.430142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:10.402 [2024-07-12 20:48:04.430155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.562 ms 00:31:10.402 [2024-07-12 20:48:04.430166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.433146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.402 [2024-07-12 20:48:04.433191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:10.402 [2024-07-12 20:48:04.433213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.955 ms 00:31:10.402 [2024-07-12 20:48:04.433224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.433411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.402 [2024-07-12 20:48:04.433428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:10.402 [2024-07-12 20:48:04.433442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:31:10.402 [2024-07-12 20:48:04.433453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.445106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.402 [2024-07-12 20:48:04.445466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:10.402 [2024-07-12 20:48:04.445604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:31:10.402 [2024-07-12 20:48:04.445656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.445894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.402 [2024-07-12 20:48:04.446049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:10.402 [2024-07-12 20:48:04.446180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.402 [2024-07-12 20:48:04.446387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.446543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.402 [2024-07-12 20:48:04.446623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:10.402 [2024-07-12 20:48:04.446752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.402 [2024-07-12 20:48:04.446775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.446806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.402 [2024-07-12 20:48:04.446821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:10.402 [2024-07-12 20:48:04.446834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.402 [2024-07-12 20:48:04.446846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.471656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.402 [2024-07-12 20:48:04.472074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:10.402 [2024-07-12 20:48:04.472108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.402 [2024-07-12 20:48:04.472123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.487398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.402 [2024-07-12 20:48:04.487512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:10.402 [2024-07-12 20:48:04.487533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.402 [2024-07-12 20:48:04.487545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.487683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.402 [2024-07-12 20:48:04.487702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:10.402 [2024-07-12 20:48:04.487715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.402 [2024-07-12 20:48:04.487728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.487792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.402 [2024-07-12 20:48:04.487807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:10.402 [2024-07-12 20:48:04.487819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.402 [2024-07-12 20:48:04.487831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.487913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.402 [2024-07-12 20:48:04.487938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
00:31:10.402 [2024-07-12 20:48:04.487950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.402 [2024-07-12 20:48:04.487962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.487999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.402 [2024-07-12 20:48:04.488033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:10.402 [2024-07-12 20:48:04.488056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.402 [2024-07-12 20:48:04.488093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.488148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.402 [2024-07-12 20:48:04.488170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:10.402 [2024-07-12 20:48:04.488183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.402 [2024-07-12 20:48:04.488194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.488273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.402 [2024-07-12 20:48:04.488289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:10.402 [2024-07-12 20:48:04.488327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.402 [2024-07-12 20:48:04.488342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.402 [2024-07-12 20:48:04.488583] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 62.104 ms, result 0 00:31:10.969 00:31:10.969 00:31:10.969 20:48:04 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:13.502 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 98284 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 98284 ']' 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 98284 00:31:13.502 Process with pid 98284 is not found 00:31:13.502 Remove shared memory files 00:31:13.502 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (98284) - No such process 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@975 -- # echo 'Process with pid 98284 is not found' 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_dbe0db69-32e5-416f-a8d2-bf12c6095658_band_md 
/dev/hugepages/ftl_dbe0db69-32e5-416f-a8d2-bf12c6095658_l2p_l1 /dev/hugepages/ftl_dbe0db69-32e5-416f-a8d2-bf12c6095658_l2p_l2 /dev/hugepages/ftl_dbe0db69-32e5-416f-a8d2-bf12c6095658_l2p_l2_ctx /dev/hugepages/ftl_dbe0db69-32e5-416f-a8d2-bf12c6095658_nvc_md /dev/hugepages/ftl_dbe0db69-32e5-416f-a8d2-bf12c6095658_p2l_pool /dev/hugepages/ftl_dbe0db69-32e5-416f-a8d2-bf12c6095658_sb /dev/hugepages/ftl_dbe0db69-32e5-416f-a8d2-bf12c6095658_sb_shm /dev/hugepages/ftl_dbe0db69-32e5-416f-a8d2-bf12c6095658_trim_bitmap /dev/hugepages/ftl_dbe0db69-32e5-416f-a8d2-bf12c6095658_trim_log /dev/hugepages/ftl_dbe0db69-32e5-416f-a8d2-bf12c6095658_trim_md /dev/hugepages/ftl_dbe0db69-32e5-416f-a8d2-bf12c6095658_vmap 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:31:13.502 00:31:13.502 real 3m25.919s 00:31:13.502 user 3m10.808s 00:31:13.502 sys 0m17.287s 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:13.502 ************************************ 00:31:13.502 END TEST ftl_restore_fast 00:31:13.502 ************************************ 00:31:13.502 20:48:07 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:13.502 20:48:07 ftl -- common/autotest_common.sh@1142 -- # return 0 00:31:13.502 20:48:07 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:13.502 20:48:07 ftl -- ftl/ftl.sh@14 -- # killprocess 90977 00:31:13.502 20:48:07 ftl -- common/autotest_common.sh@948 -- # '[' -z 90977 ']' 00:31:13.502 20:48:07 ftl -- common/autotest_common.sh@952 -- # kill -0 90977 00:31:13.502 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (90977) - No such process 00:31:13.502 Process with pid 90977 is not found 00:31:13.502 20:48:07 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 90977 is not found' 00:31:13.502 20:48:07 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:13.502 20:48:07 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=100340 00:31:13.502 20:48:07 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:13.502 20:48:07 ftl -- ftl/ftl.sh@20 -- # waitforlisten 100340 00:31:13.502 20:48:07 ftl -- common/autotest_common.sh@829 -- # '[' -z 100340 ']' 00:31:13.502 20:48:07 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.502 20:48:07 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:13.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.502 20:48:07 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.502 20:48:07 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:13.502 20:48:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:13.502 [2024-07-12 20:48:07.415042] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:31:13.502 [2024-07-12 20:48:07.415259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100340 ] 00:31:13.502 [2024-07-12 20:48:07.559311] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:31:13.502 [2024-07-12 20:48:07.580908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.761 [2024-07-12 20:48:07.705983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.354 20:48:08 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:14.354 20:48:08 ftl -- common/autotest_common.sh@862 -- # return 0 00:31:14.354 20:48:08 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:14.623 nvme0n1 00:31:14.623 20:48:08 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:14.623 20:48:08 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:14.623 20:48:08 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:14.882 20:48:08 ftl -- ftl/common.sh@28 -- # stores=e66f2dbf-658c-491c-b478-7f4bfdad11b0 00:31:14.882 20:48:08 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:14.882 20:48:08 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e66f2dbf-658c-491c-b478-7f4bfdad11b0 00:31:15.141 20:48:09 ftl -- ftl/ftl.sh@23 -- # killprocess 100340 00:31:15.142 20:48:09 ftl -- common/autotest_common.sh@948 -- # '[' -z 100340 ']' 00:31:15.142 20:48:09 ftl -- common/autotest_common.sh@952 -- # kill -0 100340 00:31:15.142 20:48:09 ftl -- common/autotest_common.sh@953 -- # uname 00:31:15.142 20:48:09 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:15.142 20:48:09 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100340 00:31:15.142 20:48:09 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:15.142 20:48:09 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:15.142 killing process with pid 100340 00:31:15.142 20:48:09 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100340' 00:31:15.142 20:48:09 ftl -- common/autotest_common.sh@967 -- # kill 100340 00:31:15.142 20:48:09 ftl -- common/autotest_common.sh@972 -- # wait 100340 00:31:15.710 20:48:09 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:15.970 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:15.970 Waiting for block devices as requested 00:31:16.229 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:16.229 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:16.229 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:16.488 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:21.759 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:21.759 20:48:15 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:31:21.759 Remove shared memory files 00:31:21.759 20:48:15 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:21.759 20:48:15 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:21.759 20:48:15 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:21.759 20:48:15 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:21.759 20:48:15 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:21.759 20:48:15 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:21.759 00:31:21.759 real 14m10.300s 00:31:21.759 user 16m21.331s 00:31:21.759 sys 1m52.580s 00:31:21.759 20:48:15 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:21.759 20:48:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:21.759 
************************************ 00:31:21.759 END TEST ftl 00:31:21.759 ************************************ 00:31:21.759 20:48:15 -- common/autotest_common.sh@1142 -- # return 0 00:31:21.759 20:48:15 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:21.759 20:48:15 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:21.759 20:48:15 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:21.759 20:48:15 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:21.759 20:48:15 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:21.759 20:48:15 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:21.759 20:48:15 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:21.759 20:48:15 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:21.759 20:48:15 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:21.759 20:48:15 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:21.759 20:48:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:21.759 20:48:15 -- common/autotest_common.sh@10 -- # set +x 00:31:21.759 20:48:15 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:21.759 20:48:15 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:21.759 20:48:15 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:21.759 20:48:15 -- common/autotest_common.sh@10 -- # set +x 00:31:23.165 INFO: APP EXITING 00:31:23.165 INFO: killing all VMs 00:31:23.165 INFO: killing vhost app 00:31:23.165 INFO: EXIT DONE 00:31:23.424 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:23.993 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:23.993 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:23.993 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:23.993 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:24.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:24.819 Cleaning 00:31:24.819 Removing: /var/run/dpdk/spdk0/config 00:31:24.819 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:24.819 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:24.819 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:24.819 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:24.819 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:24.819 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:24.819 Removing: /var/run/dpdk/spdk0 00:31:24.819 Removing: /var/run/dpdk/spdk_pid100340 00:31:24.819 Removing: /var/run/dpdk/spdk_pid75303 00:31:24.819 Removing: /var/run/dpdk/spdk_pid75464 00:31:24.820 Removing: /var/run/dpdk/spdk_pid75660 00:31:24.820 Removing: /var/run/dpdk/spdk_pid75748 00:31:24.820 Removing: /var/run/dpdk/spdk_pid75777 00:31:24.820 Removing: /var/run/dpdk/spdk_pid75896 00:31:24.820 Removing: /var/run/dpdk/spdk_pid75914 00:31:24.820 Removing: /var/run/dpdk/spdk_pid76071 00:31:24.820 Removing: /var/run/dpdk/spdk_pid76138 00:31:24.820 Removing: /var/run/dpdk/spdk_pid76215 00:31:24.820 Removing: /var/run/dpdk/spdk_pid76307 00:31:24.820 Removing: /var/run/dpdk/spdk_pid76385 00:31:24.820 Removing: /var/run/dpdk/spdk_pid76419 00:31:24.820 Removing: /var/run/dpdk/spdk_pid76455 00:31:24.820 Removing: /var/run/dpdk/spdk_pid76518 00:31:24.820 Removing: /var/run/dpdk/spdk_pid76613 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77053 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77106 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77156 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77169 00:31:24.820 Removing: 
/var/run/dpdk/spdk_pid77243 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77259 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77328 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77349 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77392 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77410 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77456 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77470 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77600 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77642 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77712 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77771 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77791 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77858 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77899 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77929 00:31:24.820 Removing: /var/run/dpdk/spdk_pid77970 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78006 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78041 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78077 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78112 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78148 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78189 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78219 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78260 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78290 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78331 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78365 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78402 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78443 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78476 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78520 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78550 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78592 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78663 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78756 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78902 00:31:24.820 Removing: /var/run/dpdk/spdk_pid78975 00:31:24.820 Removing: /var/run/dpdk/spdk_pid79006 00:31:24.820 Removing: /var/run/dpdk/spdk_pid79454 00:31:24.820 Removing: /var/run/dpdk/spdk_pid79541 00:31:25.079 Removing: /var/run/dpdk/spdk_pid79645 00:31:25.079 Removing: /var/run/dpdk/spdk_pid79687 00:31:25.079 Removing: /var/run/dpdk/spdk_pid79707 00:31:25.079 Removing: /var/run/dpdk/spdk_pid79783 00:31:25.079 Removing: /var/run/dpdk/spdk_pid80396 00:31:25.079 Removing: /var/run/dpdk/spdk_pid80427 00:31:25.079 Removing: /var/run/dpdk/spdk_pid80920 00:31:25.079 Removing: /var/run/dpdk/spdk_pid81007 00:31:25.079 Removing: /var/run/dpdk/spdk_pid81105 00:31:25.079 Removing: /var/run/dpdk/spdk_pid81153 00:31:25.079 Removing: /var/run/dpdk/spdk_pid81178 00:31:25.079 Removing: /var/run/dpdk/spdk_pid81208 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83041 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83162 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83170 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83189 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83230 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83234 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83246 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83291 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83295 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83307 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83352 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83356 00:31:25.079 Removing: /var/run/dpdk/spdk_pid83368 00:31:25.079 Removing: /var/run/dpdk/spdk_pid84718 00:31:25.079 Removing: /var/run/dpdk/spdk_pid84796 00:31:25.079 Removing: /var/run/dpdk/spdk_pid86187 00:31:25.079 Removing: /var/run/dpdk/spdk_pid87535 
00:31:25.079 Removing: /var/run/dpdk/spdk_pid87618 00:31:25.079 Removing: /var/run/dpdk/spdk_pid87705 00:31:25.079 Removing: /var/run/dpdk/spdk_pid87788 00:31:25.079 Removing: /var/run/dpdk/spdk_pid87893 00:31:25.079 Removing: /var/run/dpdk/spdk_pid87962 00:31:25.079 Removing: /var/run/dpdk/spdk_pid88091 00:31:25.079 Removing: /var/run/dpdk/spdk_pid88437 00:31:25.079 Removing: /var/run/dpdk/spdk_pid88468 00:31:25.079 Removing: /var/run/dpdk/spdk_pid88918 00:31:25.079 Removing: /var/run/dpdk/spdk_pid89094 00:31:25.079 Removing: /var/run/dpdk/spdk_pid89183 00:31:25.079 Removing: /var/run/dpdk/spdk_pid89282 00:31:25.079 Removing: /var/run/dpdk/spdk_pid89326 00:31:25.079 Removing: /var/run/dpdk/spdk_pid89350 00:31:25.079 Removing: /var/run/dpdk/spdk_pid89628 00:31:25.079 Removing: /var/run/dpdk/spdk_pid89665 00:31:25.079 Removing: /var/run/dpdk/spdk_pid89717 00:31:25.079 Removing: /var/run/dpdk/spdk_pid90060 00:31:25.079 Removing: /var/run/dpdk/spdk_pid90199 00:31:25.079 Removing: /var/run/dpdk/spdk_pid90977 00:31:25.079 Removing: /var/run/dpdk/spdk_pid91090 00:31:25.079 Removing: /var/run/dpdk/spdk_pid91266 00:31:25.079 Removing: /var/run/dpdk/spdk_pid91352 00:31:25.079 Removing: /var/run/dpdk/spdk_pid91701 00:31:25.079 Removing: /var/run/dpdk/spdk_pid91960 00:31:25.079 Removing: /var/run/dpdk/spdk_pid92307 00:31:25.079 Removing: /var/run/dpdk/spdk_pid92491 00:31:25.079 Removing: /var/run/dpdk/spdk_pid92617 00:31:25.079 Removing: /var/run/dpdk/spdk_pid92654 00:31:25.079 Removing: /var/run/dpdk/spdk_pid92780 00:31:25.079 Removing: /var/run/dpdk/spdk_pid92795 00:31:25.079 Removing: /var/run/dpdk/spdk_pid92837 00:31:25.079 Removing: /var/run/dpdk/spdk_pid93018 00:31:25.079 Removing: /var/run/dpdk/spdk_pid93238 00:31:25.079 Removing: /var/run/dpdk/spdk_pid93629 00:31:25.079 Removing: /var/run/dpdk/spdk_pid94050 00:31:25.079 Removing: /var/run/dpdk/spdk_pid94469 00:31:25.079 Removing: /var/run/dpdk/spdk_pid94962 00:31:25.079 Removing: /var/run/dpdk/spdk_pid95099 00:31:25.079 Removing: /var/run/dpdk/spdk_pid95196 00:31:25.079 Removing: /var/run/dpdk/spdk_pid95827 00:31:25.079 Removing: /var/run/dpdk/spdk_pid95901 00:31:25.079 Removing: /var/run/dpdk/spdk_pid96376 00:31:25.079 Removing: /var/run/dpdk/spdk_pid96786 00:31:25.079 Removing: /var/run/dpdk/spdk_pid97296 00:31:25.079 Removing: /var/run/dpdk/spdk_pid97422 00:31:25.079 Removing: /var/run/dpdk/spdk_pid97456 00:31:25.079 Removing: /var/run/dpdk/spdk_pid97514 00:31:25.079 Removing: /var/run/dpdk/spdk_pid97570 00:31:25.079 Removing: /var/run/dpdk/spdk_pid97632 00:31:25.079 Removing: /var/run/dpdk/spdk_pid97822 00:31:25.079 Removing: /var/run/dpdk/spdk_pid97884 00:31:25.079 Removing: /var/run/dpdk/spdk_pid97951 00:31:25.079 Removing: /var/run/dpdk/spdk_pid98024 00:31:25.079 Removing: /var/run/dpdk/spdk_pid98053 00:31:25.337 Removing: /var/run/dpdk/spdk_pid98137 00:31:25.337 Removing: /var/run/dpdk/spdk_pid98284 00:31:25.337 Removing: /var/run/dpdk/spdk_pid98511 00:31:25.337 Removing: /var/run/dpdk/spdk_pid98965 00:31:25.337 Removing: /var/run/dpdk/spdk_pid99441 00:31:25.337 Removing: /var/run/dpdk/spdk_pid99884 00:31:25.337 Clean 00:31:25.337 20:48:19 -- common/autotest_common.sh@1451 -- # return 0 00:31:25.337 20:48:19 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:31:25.337 20:48:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:25.337 20:48:19 -- common/autotest_common.sh@10 -- # set +x 00:31:25.337 20:48:19 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:31:25.337 20:48:19 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:31:25.337 20:48:19 -- common/autotest_common.sh@10 -- # set +x 00:31:25.337 20:48:19 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:25.337 20:48:19 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:25.337 20:48:19 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:25.337 20:48:19 -- spdk/autotest.sh@391 -- # hash lcov 00:31:25.337 20:48:19 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:25.337 20:48:19 -- spdk/autotest.sh@393 -- # hostname 00:31:25.337 20:48:19 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:25.595 geninfo: WARNING: invalid characters removed from testname! 00:31:52.135 20:48:45 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:54.666 20:48:48 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:57.196 20:48:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:00.478 20:48:54 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:03.008 20:48:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:05.547 20:48:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:08.080 20:49:01 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:08.081 20:49:01 -- common/autobuild_common.sh@15 -- $ source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:08.081 20:49:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:08.081 20:49:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:08.081 20:49:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:08.081 20:49:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.081 20:49:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.081 20:49:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.081 20:49:01 -- paths/export.sh@5 -- $ export PATH 00:32:08.081 20:49:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.081 20:49:01 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:32:08.081 20:49:01 -- common/autobuild_common.sh@444 -- $ date +%s 00:32:08.081 20:49:01 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720817341.XXXXXX 00:32:08.081 20:49:01 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720817341.1SILRH 00:32:08.081 20:49:01 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:32:08.081 20:49:01 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:32:08.081 20:49:01 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:32:08.081 20:49:01 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:32:08.081 20:49:01 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:32:08.081 20:49:01 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:32:08.081 20:49:01 -- common/autobuild_common.sh@460 -- $ get_config_params 00:32:08.081 20:49:01 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:32:08.081 20:49:01 -- common/autotest_common.sh@10 -- $ set +x 00:32:08.081 20:49:01 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma 
--with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:32:08.081 20:49:01 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:32:08.081 20:49:01 -- pm/common@17 -- $ local monitor 00:32:08.081 20:49:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:08.081 20:49:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:08.081 20:49:01 -- pm/common@25 -- $ sleep 1 00:32:08.081 20:49:01 -- pm/common@21 -- $ date +%s 00:32:08.081 20:49:01 -- pm/common@21 -- $ date +%s 00:32:08.081 20:49:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720817341 00:32:08.081 20:49:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720817341 00:32:08.081 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720817341_collect-vmstat.pm.log 00:32:08.081 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720817341_collect-cpu-load.pm.log 00:32:09.017 20:49:02 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:32:09.017 20:49:02 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:32:09.017 20:49:02 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:32:09.017 20:49:02 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:09.017 20:49:02 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:32:09.017 20:49:02 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:09.017 20:49:02 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:09.017 20:49:02 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:09.017 20:49:02 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:09.017 20:49:02 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:09.017 20:49:03 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:09.017 20:49:03 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:09.017 20:49:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:09.017 20:49:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:09.017 20:49:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:09.017 20:49:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:32:09.017 20:49:03 -- pm/common@44 -- $ pid=102055 00:32:09.017 20:49:03 -- pm/common@50 -- $ kill -TERM 102055 00:32:09.017 20:49:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:09.017 20:49:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:32:09.017 20:49:03 -- pm/common@44 -- $ pid=102056 00:32:09.017 20:49:03 -- pm/common@50 -- $ kill -TERM 102056 00:32:09.017 + [[ -n 5887 ]] 00:32:09.017 + sudo kill 5887 00:32:09.025 [Pipeline] } 00:32:09.043 [Pipeline] // timeout 00:32:09.048 [Pipeline] } 00:32:09.063 [Pipeline] // stage 00:32:09.068 [Pipeline] } 00:32:09.083 [Pipeline] // catchError 00:32:09.092 [Pipeline] stage 00:32:09.094 [Pipeline] { (Stop VM) 00:32:09.106 [Pipeline] sh 00:32:09.380 + vagrant halt 00:32:12.750 ==> default: 
Halting domain... 00:32:18.034 [Pipeline] sh 00:32:18.313 + vagrant destroy -f 00:32:22.552 ==> default: Removing domain... 00:32:22.821 [Pipeline] sh 00:32:23.098 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:32:23.106 [Pipeline] } 00:32:23.125 [Pipeline] // stage 00:32:23.131 [Pipeline] } 00:32:23.148 [Pipeline] // dir 00:32:23.153 [Pipeline] } 00:32:23.170 [Pipeline] // wrap 00:32:23.176 [Pipeline] } 00:32:23.192 [Pipeline] // catchError 00:32:23.201 [Pipeline] stage 00:32:23.204 [Pipeline] { (Epilogue) 00:32:23.220 [Pipeline] sh 00:32:23.500 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:30.089 [Pipeline] catchError 00:32:30.091 [Pipeline] { 00:32:30.101 [Pipeline] sh 00:32:30.378 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:30.378 Artifacts sizes are good 00:32:30.387 [Pipeline] } 00:32:30.400 [Pipeline] // catchError 00:32:30.408 [Pipeline] archiveArtifacts 00:32:30.413 Archiving artifacts 00:32:30.552 [Pipeline] cleanWs 00:32:30.563 [WS-CLEANUP] Deleting project workspace... 00:32:30.563 [WS-CLEANUP] Deferred wipeout is used... 00:32:30.569 [WS-CLEANUP] done 00:32:30.570 [Pipeline] } 00:32:30.584 [Pipeline] // stage 00:32:30.587 [Pipeline] } 00:32:30.598 [Pipeline] // node 00:32:30.602 [Pipeline] End of Pipeline 00:32:30.635 Finished: SUCCESS